Rewriting Stardog.js
The JavaScript world has changed since we built stardog.js, the open source connector to Stardog. Now it needs some love.
TLDR: We made it better; GraphQL is coming; and we just released version 1.0.0-rc1 on npmjs.
When I applied for a job at Stardog, after having several conversations with the leadership team and other engineers, my “homework” assignment was to do a code review of the stardog.js library.
I wrote a six or seven paragraph technical review, got the job, and never really thought about my review again.
It all came full circle when we needed to use stardog.js for a new project. If you’ve never used it, stardog.js is a library to make communicating with the Stardog HTTP server easier. It is considered a “universal” JavaScript library, meaning that it should work the same in Node and in a browser. It’s a useful library to be sure, but it was showing its age.
The first and biggest issue was that it wasn’t really compatible with modern front-end tooling. The team was using webpack and ES6 import statements to load stardog.js into a small React app. I was having trouble getting one of the stardog.js calls to work correctly, so I started to litter stardog.js with debugger statements. Not a single one of them was being hit when making XHR requests back to the Stardog server. After 20 minutes of confusion, I realized what was happening: we were using the “node side” of the library.
A quick look at the following lines of code revealed the issue:
var isNode = (typeof exports !== "undefined" &&
  typeof module !== "undefined" &&
  module.exports);
Using webpack, module and module.exports are always defined, so the code was taking the isNode path and using all the Node libraries instead of the browser-based ones. Everything was functioning, but certainly not as intended.
Behind the scenes, stardog.js uses restler for making HTTP calls in the Node execution context. It was supposed to be using jQuery for XHR calls in the browser.
The application we were building only leveraged the basic functionality provided by stardog.js. It worked well enough for that, but any non-trivial project would eventually run into trouble. Additionally, like it or not, pretty much every front-end project now uses some kind of bundler tooling, so stardog.js has to work correctly in those contexts, too. With this as the catalyst, it was time to start a rewrite of stardog.js.
Good thing I kept that technical review, as it became a springboard for the work we needed to do to modernize stardog.js. Our three primary goals for the rewrite: universal JavaScript that works the same in Node and the browser; fetch, the standard way to make XHR requests from the browser; and a build step that packages the library for modern front-end tooling.

With the rise in popularity of Node and React, there has been an expectation shift in the web development community. JavaScript should run the same regardless of the environment. There are certainly exceptions (HTTP servers, file system operations, animations) but, in general, this is the new expectation.
The previous version of stardog.js achieved this by using feature detection, checking for module.exports. Detecting environments this way is unreliable and should be avoided whenever possible. It is better to leverage tooling that handles that for you.
The first thing we did was write the library for Node using standard CommonJS require statements. This allowed us to structure the code in a much more modular way. One catch of the Node-first approach was that we couldn’t use any of Node’s core modules like url, http, or querystring because they don’t exist in the browser. Any modules would have to be available via npm and be universal.
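For a purely illustrative example (not actual stardog.js source), a small CommonJS module in this style might build a query string by hand instead of reaching for Node’s querystring module:

// utils/query-string.js (hypothetical file name)
// Builds a query string without Node's core querystring module,
// so the same code runs in both Node and the browser.
function toQueryString(params) {
  return Object.keys(params)
    .map(key => encodeURIComponent(key) + '=' + encodeURIComponent(params[key]))
    .join('&');
}

module.exports = { toQueryString };

Because it relies only on encodeURIComponent, which exists in both Node and the browser, the same file works in either environment.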
Utility libraries like lodash, for example, are inherently universal because they don’t require any environment-specific features. Other libraries like form-data use a special, technically-not-spec-but-basically-spec field in package.json named “browser”.
The “browser” key instructs builder tools what file to use when they are traversing the dependency tree. This ends up being a much more reliable approach than feature detection as library authors generally know what JavaScript files they want to be used in which context. It does rely on builder tools respecting that field, but all of the main players in the space currently do.
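As a sketch, the relevant part of such a library’s package.json might look like this (the file names are hypothetical):

{
  "name": "some-universal-lib",
  "main": "lib/node-entry.js",
  "browser": "lib/browser-entry.js"
}

A bundler resolving this package would pick lib/browser-entry.js, while Node would keep using lib/node-entry.js.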
There is a third approach to universal JavaScript, and that is the idea of polyfills and ponyfills. Both techniques are essentially fancy-talk for “if this functionality exists in the current environment, use the native implementation; otherwise, use the supplied code.” This is how fetch works, for example.
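Here is a rough sketch of the two patterns; the fallback is a stub, and the ./fetch-fallback module path is hypothetical:

// Polyfill: patch the global only when the native feature is missing.
if (typeof window !== 'undefined' && typeof window.fetch !== 'function') {
  window.fetch = function fetchFallback(url, options) {
    // ...an XMLHttpRequest-based implementation would go here...
    return Promise.reject(new Error('not implemented in this sketch'));
  };
}

// Ponyfill: never touch the global; just export whichever implementation applies.
const fetchPonyfill =
  typeof fetch === 'function' ? fetch : require('./fetch-fallback');
module.exports = fetchPonyfill;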
Something that has gotten the JavaScript community very excited is a standard, Promise-based way to make XMLHttpRequest (XHR) requests. For many years there have been many competing approaches but nothing even close to a de facto standard. Enter fetch, a first-class XHR request machine. At the time of this writing, 75% of global browser usage supports a native fetch implementation. For the other 25%, there exists an excellent polyfill maintained by the fine folks at GitHub.
There is a lot of code there to digest, so here are the salient bits. The first thing the code does is check to see if fetch is already implemented in the current environment (the browser in this case); if so, bail out and do nothing. This causes all calls to the fetch API to use the native implementation. If fetch isn’t implemented, it is reimplemented using XMLHttpRequest and made globally available. This is an ideal setup because in your user-land code you can always use fetch and not worry about native or polyfill; to your code, it is all the same.
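In user-land, that means a call like the following works identically whether fetch is native or polyfilled (the URL is just a placeholder):

fetch('/some/endpoint', { headers: { Accept: 'application/json' } })
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => console.error(err));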
Taking this one step further, we want to use fetch in the browser, whether it’s native or not, and use fetch in Node, too. That is, we want a single developer API that’s completely context agnostic. Thankfully, there is a (mostly) spec-compliant fetch implementation written for Node called, surprise, surprise, node-fetch.
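Using it looks just like it does in the browser; for example, a minimal sketch with an illustrative URL:

// Node execution context: same Promise-based API, supplied by node-fetch.
const fetch = require('node-fetch');

fetch('http://localhost:5820', { headers: { Accept: 'text/plain' } })
  .then(response => response.text())
  .then(body => console.log(body));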
By combining all of these fetch-related concepts, stardog.js is able to use fetch to make all of the requests to the Stardog HTTP server using the same API and code regardless of the environment. If you look at the new code currently in the “development” branch, you won’t see any branching logic trying to guess how to make HTTP requests. It is just fetch all the way down.
So we wrote stardog.js with the intent of being universal, but the actual code is littered with require statements, which mean less than nothing in a browser. While the code we wrote is environment agnostic, we still need to do something so that it can function as expected in the browser.
We needed a tool to read our code, in-line all of the dependencies, and create a single JavaScript file that can be used in the browser. We selected rollup.js because it’s more tailored for small libraries rather than web applications.
In our case, rollup let us write stardog.js using the standard require and CommonJS semantics we are used to, but then execute a build step to bundle all of the dependencies into something a browser can execute.
And it’s during this build phase that the “browser” field in package.json comes into play. When rollup follows a require statement, if it’s an npm package and it has a specified “browser” field, that is the file that will be used to satisfy the require statement.
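A minimal rollup configuration along those lines might look like the following sketch; the plugin choices (rollup-plugin-node-resolve with browser: true, rollup-plugin-commonjs) and file paths are assumptions, not necessarily the actual stardog.js build setup:

// rollup.config.js (a sketch, not the actual stardog.js configuration)
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';

export default {
  input: 'lib/index.js',           // hypothetical entry point
  output: {
    file: 'dist/stardog.js',       // the browser-ready bundle
    format: 'umd',
    name: 'stardog'
  },
  plugins: [
    resolve({ browser: true }),    // honor the "browser" field in package.json
    commonjs()                     // convert CommonJS modules so rollup can bundle them
  ]
};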
Besides just flattening out require statements and inlining dependencies, rollup.js lets us use many ES2015 features not available in some browsers, like classes, object destructuring, default arguments, and arrow functions. These features are “transpiled” with babel down to ES5, which all browsers natively support.
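For a quick, made-up illustration of the kind of source babel handles for us:

// ES2015 source: class, destructuring, default arguments, and an arrow function.
class QueryOptions {
  constructor({ reasoning = false, limit = 10 } = {}) {
    this.reasoning = reasoning;
    this.limit = limit;
  }
}

const describe = opts => 'reasoning=' + opts.reasoning + ', limit=' + opts.limit;
console.log(describe(new QueryOptions({ limit: 100 })));
// Babel rewrites all of this as plain ES5 functions and object lookups.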
As you can see in our package.json file, we’ve added a “browser” key pointing to the built stardog.js file in the “dist” folder. So when you want to use stardog.js in your project, just require or import it and you’ll be communicating with the Stardog HTTP server faster than you can say SPARQL.
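That looks something like this, assuming the package is published on npm as stardog (check the README for the exact name):

// CommonJS, in Node or via a bundler:
const stardog = require('stardog');

// Or with ES module syntax:
// import * as stardog from 'stardog';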
The stardog.js roadmap includes a mix of boring housekeeping tasks and some really exciting new features. The housekeeping is typical maintenance work; the interesting bits show up when we start talking about new features.
I’m going to let you in on a little secret…GraphQL is coming to the Stardog HTTP server. The existing Stardog REST API is going to be migrated to GraphQL.
First benefit of this move: re-using your existing GraphQL knowledge and skill set when building Stardog apps. Win.
Next, as this change is rolled out to the many (many) RESTful endpoints, stardog.js is going to migrate along with it. Rather than pre-packaged URIs and HTTP message bodies, it will become a collection of preconfigured GraphQL queries and APIs for building GraphQL queries that can be dispatched using fetch.
Another benefit of moving to GraphQL is that developers will be able to query whatever they want, provided it is exposed in the GraphQL schema. You won’t have to wait for stardog.js to add a method exposing new-feature-foo; developers will be able to query the GraphQL endpoint directly for the information they need.
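As a rough sketch of what dispatching such a query with fetch could look like (the endpoint path and query shape are placeholders, not the final Stardog GraphQL API):

fetch('http://localhost:5820/myDb/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ Person { name } }' })
})
  .then(response => response.json())
  .then(result => console.log(result.data));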
Rewriting stardog.js was a much bigger job than we originally thought it was going to be. It’s been a great experience for us to go deep into an unfamiliar code base and to try to understand and improve it. We also got to experiment with new tooling and to dig deep into how JavaScript libraries are bundled and distributed.
These changes add up to a better and more fun developer experience when programming against Stardog. We encourage pull requests of all kinds and welcome feedback and suggestions from the community.
Download Stardog today to start your free 30-day evaluation.