Studio Performance

Apr 19, 2018, 15 minute read

Performance in Stardog Studio matters a lot. Here are some of our secrets.


JavaScript Considerations

One of our goals with Stardog Studio is to make it the Knowledge Graph IDE, i.e., a one-stop shop for working with Stardog’s core languages and features. To achieve this, we need to ensure that Studio remains performant and responsive. In this post, I’ll talk about some of the techniques we use to ensure this, and I’ll mention some possibilities going forward, too.

Studio is written in TypeScript and transpiled to JavaScript. It uses libraries like React, React Router, and Redux to manage UI structure and application state, and it uses Microsoft’s Monaco editor for text editing. Finally, it runs as an Electron application. This stack provides a ton of benefits (a rapid development cycle, a great library ecosystem, transpile-time type-checking, and so on), but it presents us with some challenges, too:

  • First, since JavaScript is single-threaded, there aren’t necessarily obvious ways to prevent intense data-processing tasks from blocking UI tasks and rendering the UI unresponsive.
  • Second, because Electron uses the Chromium browser’s rendering engine, it inherits all of the possible performance pitfalls of the traditional DOM (e.g., fully synchronous re-layouts of the entire document when reading certain values).
  • Third, because Redux doesn’t programmatically enforce any particular rules concerning how you connect your React components to your Redux store, and because React generally encourages a top-down approach (and Redux used to, as well), you can run into performance bottlenecks if you combine these two technologies without reorienting your thinking about state at least slightly.

In what follows, I’ll walk through some specific cases where each of these challenges arose for the Studio team, and explain our techniques for resolving them.

Data Processing in Web Workers

In Studio, querying a database is one of the primary actions taken by users. The typical user types a query into the editor, hits the “Execute” button (or uses the Cmd/Ctrl+E keyboard shortcut), and expects to see results quickly. A performance challenge arises here almost immediately. What if the set of results is very large – say, hundreds of thousands of rows of data? How do we get that data from Stardog and make it all available to the user in a nicely-rendered, navigable, responsive table?

The single-threaded nature of JavaScript can be a real obstacle here. The obstacle can’t be overcome just by not rendering all of the data at once, either (which might otherwise be the obvious way to go). That’s because various operations that involve processing the data (e.g., parsing it into JSON, formatting it appropriately, sorting it, filtering it, etc.) can themselves be massively time-intensive (we’ll look at some measurements that show this, below).
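To get a rough feel for these costs yourself, you can time JSON.parse on a synthetic result set. This is an illustrative sketch, not Stardog's actual response format, and the row count is arbitrary:

```typescript
// Build a synthetic SPARQL-results-like payload of n rows.
const makeRows = (n: number) =>
  Array.from({ length: n }, (_, i) => ({
    s: `urn:subject:${i}`,
    p: 'urn:predicate',
    o: `literal value ${i}`,
  }));

const payload = JSON.stringify({ results: { bindings: makeRows(100000) } });

// Time the parse. On the main thread, the UI would be blocked for this
// entire duration -- no rendering, no event handling.
const start = Date.now();
const parsed = JSON.parse(payload);
const elapsedMs = Date.now() - start;
console.log(`Parsed ${parsed.results.bindings.length} rows in ${elapsedMs}ms`);
```

Scale the row count up (and remember that parsing is only one of several processing steps, alongside formatting, sorting, and filtering) and it becomes clear why this work has to happen somewhere other than the UI thread.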

To overcome this obstacle in Studio, we make judicious use of web workers and a custom JSON protocol for inter-worker communication (very roughly along the lines of JSON-RPC). For example, instead of doing the aforementioned data-fetching and JSON.parse-ing operations in the main thread (thereby blocking the UI), we send a small, JSON-formatted message to a dedicated web worker, telling it to do the fetching and processing for us. This lets us take advantage of the fact that the browser is multi-threaded even though JavaScript isn’t. The worker completes its processing tasks in a background thread, freeing up the main thread to do other things until the results are really ready. Here’s a contrived (greatly simplified) example of how it works in code:

// -- main-thread.ts --
/**
 * Sends a request for data to our dedicated web worker using our
 * JSON-based protocol. The single parameter should be an object
 * with information concerning the stardog.js API method to call,
 * as well as any arguments to pass along to it. The method returns
 * a Promise that is resolved when the worker sends back the
 * requested data.
 */
const getStardogDataWithWorker = async (rpcMethodData: StardogApiMethodData) =>
  new Promise((resolve, reject) => {
    const eventId: string = uuid();

    // Add a unique, one-time message listener that cleans up after itself when
    // it receives the requested data.
    let dataReceiver = (event: WorkerResponseEvent<StardogApiResponse>) => {
      if (event.data.eventId !== eventId) {
        return; // noop
      }

      // Clean up -- ensures no memory leaks.
      worker.removeEventListener('message', dataReceiver);
      dataReceiver = null;

      // ...Error handling code removed...

      resolve(event.data.parsedResponse);
    };
    worker.addEventListener('message', dataReceiver);

    worker.postMessage({
      eventId,
      type: WorkerRpcEvent,
      method: 'callStardogApi',
      methodData: rpcMethodData,
    });
  });
const response = await getStardogDataWithWorker({
  apiMethodName: 'query.execute',
  args: [connectionData, dbName, 'select distinct ?s ?p ?o { ?s ?p ?o}'],
});

// -- worker.ts --
const rpcMethods = {
  async callStardogApi(ctx: Worker, eventId: string, methodData: StardogApiMethodData) {
    const { apiMethodName, args } = methodData;
    // Fun little functional way to reduce 'query.execute' (etc.) to
    // the actual method on stardog.js: stardog.query.execute
    const apiMethod = apiMethodName.split('.').reduce(
      (methodAccumulator, pathPart) => methodAccumulator[pathPart],
      stardog
    );
    // This now happens within the worker's thread. All JSON parsing occurs here:
    const parsedResponse = await apiMethod(...args);
    ctx.postMessage({
      eventId,
      parsedResponse,
      type: WorkerRpcEvent,
    });
  },
};

(<WorkerGlobalScope>self).addEventListener(
  'message',
  (event: StardogAPIEvent) => {
    const { method, methodData, eventId } = event.data;
    rpcMethods[method](self, eventId, methodData);
  }
);

This by itself works well enough for reasonably-sized sets of data (say, fewer than 10,000 rows). However, for very large result sets, the browser engine itself begins to “choke” when the parsed data is sent back from the web worker, and still ends up blocking all rendering. In this case, even though the web worker has fetched and parsed the results off of the main thread, the act of sending such a large amount of JSON back to the main thread is too much work for the browser to handle at once.

By the way, since Studio runs as an Electron application, why not just use communication between the renderer process and the main process for this sort of thing, instead of using web workers? There are at least two reasons. One is that JS developers simply tend to be more familiar with web workers than with Electron-specific inter-process communication, so doing it this way makes it easier for others to contribute in the future. Another is a bit more forward-looking: the web worker code could be used in a purely browser-based application, whereas the alternative could not, making this approach more versatile.

Enjoying the Payoff

Let’s look at two performance timelines showing the effect. In the first image below, the result set includes nearly a half million rows (sets of triples). The parsed results are sent back from the worker around the 9000ms mark. At that point, processing (the yellow line on the timeline) begins, and painting (the green line on the timeline) completely flatlines. As you can see, the browser is completely unable to paint for over six full seconds, and thus cannot respond to any user interactions. The app is effectively frozen for an excruciating amount of time.

For comparison, here is the same kind of timeline when the result set is a bit smaller, at 100k:

In this scenario, painting/rendering still flatlines for over a second – much less than six, but still not a great experience.

To get around this obstacle, we added to our inter-worker protocol the ability to stream small chunks of JSON data. In cases where the result set is greater than our pre-defined chunk size, something like the following method gets called:

const streamResults = (eventId: string, resultSet: ResultSet) => {
  const resultChunkToSend = resultSet.slice(0, chunkSize);
  const remainingResults = resultSet.slice(chunkSize);
  const isResponseComplete = remainingResults.length < 1;

  // This runs inside the worker, so post via the worker's global scope.
  (<WorkerGlobalScope>self).postMessage({
    eventId,
    isResponseComplete,
    type: WorkerRpcEvent,
    partialResponse: resultChunkToSend,
  });

  if (!isResponseComplete) {
    setTimeout(() => streamResults(eventId, remainingResults), intervalMs);
  }
};

This method sends a chunk of the parsed results back to the main thread, and then, if additional results remain, recursively queues up another round of chunk-sending for the remaining data. Importantly, the queued-up chunk-sending is executed asynchronously (via setTimeout, since we don’t have access to requestIdleCallback or requestAnimationFrame in web workers). Iterating synchronously through the result set wouldn’t resolve our rendering obstacle, because it would still entangle the browser in extensive processing without ever “backing off” to allow it to handle painting/rendering and user interactions. Streaming chunks asynchronously, on the other hand, allows the browser to “breathe” in between chunks, handling UI events and keeping things performant and responsive.
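On the main thread, the receiving side is straightforward: accumulate each partialResponse until a message arrives with isResponseComplete set, then resolve the pending Promise. Here's a minimal sketch of that accumulation logic as a pure function; the message shape mirrors the streamResults code above, and the function name is ours:

```typescript
interface PartialResponseMessage {
  eventId: string;
  isResponseComplete: boolean;
  partialResponse: unknown[];
}

// Appends one streamed chunk to the rows gathered so far. The main-thread
// message listener calls this once per chunk, and resolves its Promise
// (or starts rendering) once `done` is true.
const accumulateChunk = (
  rowsSoFar: unknown[],
  message: PartialResponseMessage
): { rows: unknown[]; done: boolean } => ({
  rows: rowsSoFar.concat(message.partialResponse),
  done: message.isResponseComplete,
});
```

Keeping this step as a cheap array append is what makes each main-thread turn short enough for the browser to paint in between chunks.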

As a result, Studio is able to handle even very large result sets without locking up intolerably. For comparison, here is a performance timeline for Studio’s handling of the half million results with asynchronous streaming enabled:

Here, painting is never blocked entirely (this is hard to see on the image, but the apparent “flatline” around 29000ms includes at least some activity beneath the line), and it is slowed significantly for only about 1.7 seconds, vs. the 6+ seconds of total blocking required before.

Next, here is a performance timeline for the 100k result set with asynchronous streaming:

Once again, rendering never fully blocks, and any significant processing only takes 180ms (a video game-worthy response time) instead of the 1+ seconds required previously. In this case, the delay is often unnoticeable.

All told, using the above techniques has allowed us to eliminate the UI-blocking issues caused by large result sets almost entirely. This is a win not just for us as developers, but also for users of Studio. What’s more, we have plans to totally eliminate any remaining delays in the near future by augmenting our protocol to allow the web worker to send back only the necessary chunk(s) at a given time, instead of always sending back all of the chunks constituting a result set. This should be the final piece of the puzzle for this aspect of Studio performance, and we recommend our approach to other developers encountering similar obstacles.

The Thing About Notes

In our current version of Studio, we render an arbitrary number of query editors as “notes” in a “notebook”-style view (although, based on user feedback, we are likely to change this in the near future, switching to a tabbed view). A user of this notebook might create a number of notes – say, 10 of them – and enter different queries into each one. Suppose that a user does this, navigates to another section of the application (e.g., “Databases”), then navigates back to the notebook. What would happen at that point (assuming a fairly traditional way of using React and React Router) is that the Notebook component and all of the notes within it would unmount upon navigation to “Databases” (removing the DOM nodes associated with the notebook), then re-mount on navigation back.

This behavior is often fine, but it wasn’t performant enough in Studio. Each note in the notebook has its own query editor (since each one may have different content, and many may display at once), and each editor is an instance of the Monaco editor. The Monaco editor happens to do some (currently unavoidable) DOM-related calculations whenever it initially mounts a DOM node, such as resizing itself to fit its container, and so on.

Some of these operations force the browser engine to synchronously recalculate layout and styling for the entire document (called “DOM reflow”). Because this happens on the UI thread, and because it is synchronous, all UI-related operations are blocked when this reflow occurs. And because there may be 10 or more instances of the Monaco editor rendering in the notebook, this reflow could, disastrously, happen multiple times in sequence, resulting in a noticeably unresponsive UI (or UI “jank”).

To keep the notebook performant in this case, we used two techniques: incremental rendering, and a combination of some good old-fashioned CSS and React Router features.

Incremental rendering in this case works like so. When the notebook component mounts, it is aware of the number of notes that need to render. Instead of rendering them all at once, it first renders a small number (say, 2), and then queues an additional render (if there are still more notes that need to render) to happen only when the browser is ready to paint again (using a combination of requestAnimationFrame and setTimeout). This queueing results in some more “back off” on the UI thread, allowing other UI operations (e.g., rendering of other elements and responding to user events) to occur.

Here’s a small snippet of simplified code that conveys the general idea:

class Notebook extends React.Component<NotebookProps, NotebookState> {
  private incrementalRenderAnimationId;
  private renderRequestTimeoutId;

  constructor(props: NotebookProps) {
    super(props);

    this.state = {
      // Changing this state triggers an update, which will render additional
      // notes if this number is incremented.
      numNotesToRender: 2,
    };

    // Bind so that `this` is preserved when the method is handed to
    // requestAnimationFrame in componentDidMount.
    this.incrementallyRenderNotes = this.incrementallyRenderNotes.bind(this);
  }

  /* ... */
  componentDidMount() {
    // Queue rendering of the next batch of notes, if necessary.
    // We use a combination of setTimeout and requestAnimationFrame because
    // the Monaco operations on each animation frame are sometimes intense
    // enough that we need the additional back-off from setTimeout,
    // but we don't want to *just* use setTimeout, since that
    // doesn't coordinate with the renderer's ability to paint.
    this.renderRequestTimeoutId = setTimeout(() => {
      this.renderRequestTimeoutId = null;
      this.incrementalRenderAnimationId = requestAnimationFrame(
        this.incrementallyRenderNotes
      );
    }, 60);
  }

  // This method is called when `requestAnimationFrame`
  // receives a frame for execution.
  incrementallyRenderNotes() {
    cancelAnimationFrame(this.incrementalRenderAnimationId);
    this.incrementalRenderAnimationId = null;

    /* ... */

    if (this.state.numNotesToRender < this.props.noteIds.length) {
      // Triggers an update + new render with more notes
      this.setState((prevState) => ({
        numNotesToRender:
          prevState.numNotesToRender + incrementalRenderIncrement,
      }));
    }
  }
  /* ... */
}

While this alone improved performance significantly, it wasn’t always enough: we found that there was occasionally still a perceivable delay in switching between app sections, even when only one or two Monaco editors were rendered in total. We wanted there to be no perceivable delay. So we turned to React Router and CSS.

React Router’s Route component can render the component for a route using one of three props: component, render, and children. The documentation for the children prop says the following:

Sometimes you need to render whether the path matches the location or not. In these cases, you can use the function children prop. It works exactly like render except that it gets called whether there is a match or not. . . . This allows you to dynamically adjust your UI based on whether or not the route matches.

We realized that we could use this prop to mount the Notebook component only once (when Studio loads), then leave it mounted, but use CSS to make it completely hidden when the user navigates to another route. Here’s the basic idea:

// -- in the router component --
/* ... */
<Route
  exact
  path={'/'}
  children={(props) => <Notebook {...props} />}
/>
/* ... */

// -- in the Notebook component --
/* ... */
render() {
  return (
    // Leave the container mounted, but hide it completely (and do no further processing)
    // if the current route does not match the notebook route.
    <NotebookContainer style={!this.props.match ? { display: 'none' } : {}}>
      {/* ... */}
    </NotebookContainer>
  );
}
/* ... */

Once again, this technique paid off greatly in terms of UI responsiveness and (thus) user experience. The notebook renders snappily regardless of the number of notes, and regardless of the way that the user navigates the application. Another performance win!

Connecting Components

Our last performance obstacle for this post involves the combination of React and Redux. As I mentioned earlier, the Redux documentation used to recommend connecting “top-level” components to the Redux store and passing down the retrieved data to other components via props. React also encourages this pattern to some degree, which it refers to as “lifting state up.” This is generally a great pattern, but, when taken to extremes, it can get you into trouble.

Consider the notebook one more time. Broken down into individual React components, its structure consists of a tree with many nodes. There is the NotebookContainer, then the Notebook, then each Note, then each Editor within the note, each ResultTable within the note, and so on.

Were we to connect only the “top-level” Notebook itself to the Redux store, state updates would trigger a number of unnecessary cascading re-renders (given that React’s default behavior is to re-render the children of any component that re-renders itself). This is true even if we were to connect each Note, instead (in that case, an update to the note’s state would trigger re-rendering of both the note’s Editor and the note’s ResultTable, which would often be unnecessary and inefficient).

To address this, we developed a rule according to which we connect a component to the Redux store when and only when the component needs access to state that no direct ancestor needs to access. The Notebook needs access to the IDs of notes in the store, for instance, since it needs to know how many notes to render as children, how to order them, and so on. It doesn’t need much else, though (for example, it shouldn’t care about the contents of the queries within the notes). Instead, each note’s Editor should be connected to the store, in order to get information such as its query, the database against which the query is being written (so that it can auto-complete prefixes in that database), and so on.
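In react-redux terms, the rule cashes out as mapStateToProps functions like these. The state shape and names here are hypothetical, for illustration only:

```typescript
// Hypothetical normalized state shape for the notebook.
interface AppState {
  noteIds: string[];
  editorsById: {
    [noteId: string]: { query: string; database: string };
  };
}

// The Notebook connects only to the list of note IDs; it never sees the
// contents of any editor, so editor changes cannot re-render it.
const mapStateToNotebookProps = (state: AppState) => ({
  noteIds: state.noteIds,
});

// Each Editor connects to just its own slice of state, looked up by the
// ID its parent passed down as a prop.
const mapStateToEditorProps = (
  state: AppState,
  ownProps: { noteId: string }
) => state.editorsById[ownProps.noteId];
```

Passing only IDs down the tree while each child looks up its own data is what keeps re-renders narrowly scoped.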

Following this rule pays off performance-wise in two ways.

First, changes to any properties of an editor within the notebook trigger re-renders that are constrained to that editor alone. The notebook itself will only re-render very infrequently, when the note IDs change in some way.

Second, the Notebook component’s state becomes much simpler to memoize (using, e.g., reselect). Unnecessary processing can be totally short-circuited via a few strict equality checks between string IDs.

So for React/Redux applications with deeply-nested trees of components, we propose our above rule as a guide: connect any component that needs access to state in order to render, no matter how “low” it is in the tree, so long as its direct ancestor does not also need access to that bit of state (if it does, connect the ancestor instead). Happily, we discovered while writing this blog post that the official Redux FAQ is now a bit more in line with this rule, saying (emphases added),

For maximum rendering performance in a React application, state should be stored in a normalized shape, many individual components should be connected to the store instead of just a few, and connected list components should pass item IDs to their connected child list items (allowing the list items to look up their own data by ID).

Conclusion

Studio is still a fairly new project at Stardog, very much in its early stages. Even so, employing the techniques described here has allowed us to get further faster than we might have otherwise. Some of these techniques and bits of guidance should be helpful to other developers as well. At Stardog, we plan to keep using and honing them to make the Studio experience top-notch. In the meantime, we’re always open to feedback and suggestions, and would love to hear from you on our community forums.

Try Studio out for yourself here.
