The second edition of Full Stack Fest took place in Barcelona in early September, and Akuaro World was there. We have already shared our impressions of the event, but we also had the opportunity to ask a few questions of some of the speakers who gave talks during that week, and now we leave you with the answers from one of them. You can check out the three other interviews on our site, as well as watch the keynotes themselves on Codegram’s YouTube channel.
Apart from loving and advocating for the open web, Martin Splitt works at Archilogic, helping everyone build 3D and VR worlds. In his talk at Full Stack Fest, Rendering Performance from the Ground Up, he covered computer graphics: what it really means to say “GPU accelerated”, and what really goes into “painting pixels onto the screen”. He has been to Barcelona several times and really enjoys the city and its tapas bars.
How would you describe your position at Archilogic to a non-coder?
So at Archilogic, we want to help everyone build 3D and VR worlds on the web with approachable technologies, such as HTML and A-Frame.
Part of my job is to make sure people outside of the office are heard; if something is hard or impossible for you to do, I want to hear about it, and I’ll see what we at Archilogic can do about it.
I also share what we’ve learned with the broader community at conferences, in workshops and in articles, hopefully inspiring someone to build on top of that and contribute their learnings back to the community, too. Last, but most importantly, I help my team be the amazing, creative and wonderful folks they are and build stuff for 3d.io, a collection of APIs and components for building 3D and VR web content.
Is there a common misconception related to graphics tech you’ve noticed among consumers/clients? What’s the real explanation, in that case?
People ask me what they should use for VR on the web: WebVR or WebGL. That baffles me every single time. WebVR and WebGL are orthogonal; they complement each other. WebVR does not do any rendering itself. It gives us access to the headset hardware, so we can send pixels to it and read from its sensors, but the pixels have to be drawn with something else, and WebGL is often a reasonable choice, as it gives us access to hardware-accelerated 3D rendering.
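A-Frame, which Martin mentions above, is a nice illustration of that division of labour: its declarative HTML layer hides both halves, using WebGL (via three.js) to draw the pixels and the WebVR API to present them to a headset. A minimal sketch of such a scene might look like this (the release version in the script URL is an assumption; any A-Frame release of that era works the same way):

```html
<!-- Minimal A-Frame scene. A-Frame renders with WebGL (through three.js)
     and hands the finished frames to the headset via WebVR; neither API
     replaces the other. The release version below is illustrative. -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A box drawn by WebGL; WebVR only handles display and sensors -->
      <a-box position="0 1 -3" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```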
What led you to do this talk about GPUs? Is there a part of that process that you find especially interesting? Is it because of your work experience? Both?
Back in the day, when the Voodoo2 graphics card was the bee’s knees, I started with game development in my free time. I built basic 2D games but kept wondering about 3D, until I got a chance to work a bit with DirectX. I didn’t particularly like the API, and as I was switching to Linux at the time, I got into OpenGL instead. What surprised me was the vast world of new terminology and complexity that this opened up. While OpenGL is still pretty high-level, it asks for a fundamental understanding of how the rendering pipeline works.
That’s also where a lot of performance myths come from: we get superficial advice along the lines of “do this, don’t do that” without an explanation of why that is.
I wanted to change that by showing that browsers, in the end, run a graphics system that can take multiple paths from text to pixels, and some of these paths are faster than others, because they go through the GPU. Not all of them do, because GPU acceleration has a cost as well, but I think with new rendering engines like Servo, we’ll see interesting new ways of driving that cost down and unlocking more of the benefits for us in day-to-day web browsing.
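A concrete instance of the “do this, don’t do that” advice concerns CSS animations: animating `transform` can typically be handled on the compositor, with the GPU simply moving an already-painted layer each frame, while animating a layout property like `left` forces the browser back through layout and paint on the CPU every frame. A hedged sketch (class names are illustrative, and exact layer-promotion behaviour varies by browser):

```html
<!-- Two visually identical slides with very different pixel paths. -->
<style>
  /* Fast path: the transform animation can run on the compositor,
     so the GPU just repositions the element's layer each frame. */
  .composited { animation: slide-transform 1s infinite alternate; }
  @keyframes slide-transform {
    from { transform: translateX(0); }
    to   { transform: translateX(200px); }
  }

  /* Slow path: animating `left` dirties layout, so the browser must
     re-layout and repaint on the CPU before the GPU sees anything. */
  .relaid-out { position: relative; animation: slide-left 1s infinite alternate; }
  @keyframes slide-left {
    from { left: 0; }
    to   { left: 200px; }
  }
</style>
<div class="composited">Moved on the GPU</div>
<div class="relaid-out">Layout + paint every frame</div>
```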