AR Library Final Report

Google Summer of Code 2018

The Processing Foundation

Final Summary:

Student : Syam Sundar K
Project : ARCore Renderer for Processing Android

This blog summarizes the experiments I carried out while implementing the renderer, the design decisions I made, the conflicts I ran into, the places where I sought help from my mentors, how I fixed some of the hardest bugs, how I summarized each week's outcome through my weekly blogs, the visual outcomes of the project, how I scheduled my work over the three months, the current state of the AR Library, and the things I've learned through GSoC'18 and from the Processing Foundation.

First of all, my sincere thanks to Jesus Duran and Andres Colubri for their excellent mentorship. They responded immediately and assisted me whenever I needed help. I'm grateful to the Processing team, the community members and my fellow GSoC colleagues for their help.

My experience with the Processing Foundation was one of the best development periods I've had so far: all the community members were very friendly and encouraging, and the discussion forums are very engaging. My mentors were very kind, and I got in touch with them quite often. I also made a number of new friends from the community who were fellow GSoC participants with the Processing Foundation.

I started my venture with Processing by fixing small issues in the Processing Android code base. I became familiar with the code base in a short spell and wanted to contribute more. I found that an ARCore renderer for Android was one of the prioritised requirements of the repo for GSoC, and since I had prior experience with AR, I was able to make it through the selection process.

Community Bonding:

Community Bonding was quite helpful because that is when I gained a much better understanding of the code base and started interacting with my mentors more comfortably. My mentors provided top-notch support, clearing up pretty much every doubt I had. With their help and a little research, I was able to successfully set up the development environment in which I would implement the AR Library.

We interacted through Gitter, and initially Andres pointed out how the VR library was implemented, which gave me a clear idea of how to build the library from the ground up.

Design decisions:

Initially, I explored a number of native libraries like VR, as well as contributed libraries like Ketai (a library from my mentor Jesus), from which I learned the project structure and how the entities and classes should be laid out in the AR Library.

As I was fairly new to developing libraries, I started by keeping the AR Library among the contributed libraries; later, once it was reasonably consistent, I moved it into Processing Android as one of the built-in libraries that ship with the Android mode.

In my blog #2, I gave a clear-cut explanation of how the main AR renderer is broken down into a number of sub-renderers which, tied together, provide the expected AR functionality. Following that, we had a couple of evaluations, where my mentors gave me a lot of suggestions on how to carry on with the further implementation, and also asked me to raise issues, create milestones and so on, so that others could keep track of my progress as well.
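The breakdown into sub-renderers can be pictured as a simple composition: each sub-renderer handles one concern (background, planes, point cloud, objects) and the main renderer drives them in order every frame. Below is a minimal standalone sketch of that pattern; the names and signatures are illustrative and don't necessarily match the library's actual classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative composition of sub-renderers; the real classes in the
// library (PBackground, PPlane, PPointcloud, ...) may look different.
interface SubRenderer {
  String render();  // returns a label here just to make the flow visible
}

class MainARRenderer {
  private final List<SubRenderer> subRenderers = new ArrayList<>();

  void add(SubRenderer r) { subRenderers.add(r); }

  // Called once per frame: each sub-renderer draws its own part of the scene.
  List<String> drawFrame() {
    List<String> drawn = new ArrayList<>();
    for (SubRenderer r : subRenderers) {
      drawn.add(r.render());
    }
    return drawn;
  }
}
```

The benefit of this layout is that each concern can be implemented and debugged in isolation, and new sub-renderers can be slotted in without touching the others.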

Later, decisions such as which methods the library exposes were also made; the examples created for the library explain the exposed calls through comments.


I regularly updated my mentors on the progress of my project through weekly blogs and visual outcomes in the form of YouTube videos:

Blog #1:
Blog #2:
Blog #3:
Blog #4:
Blog #5:
Blog #6:
Blog #7 & #8:

The mentors also gave suggestions on making further progress after each blog.


Initially, taking off with the library was comfortable for me since I had enough time during the Community Bonding period, and I was able to successfully build and deploy the AR Library within a week and a half or so, using the Ant build system.

The implementation of the background, plane and point-cloud renderers went smoothly, and methods were also created for texturing and coloring planes through the Processing sketch, which works by relying on the AR Library itself.

Then we began the integration phase, where we had to include sketch.handleDraw(), which in turn calls the draw() in the sketch. Only when this is done do the shapes and 3D entities drawn through the sketch, such as box() and sphere(), appear in the AR scene. But since the already implemented renderers, i.e., plane, point cloud and background, were unaware of this integration, a number of conflicts were encountered:

1. Initially, the background renderer remained blank, or showed a tint proportional to the scene the camera was pointing at.
2. The background renderer was frozen by the integration at first: the frame captured when initialization was done was kept throughout the activity lifecycle, and the background didn't update with the current camera image.
3. The objects initially remained unaligned with their virtual equivalents (world coordinates).
4. On placing primitive shapes in the scene and translating them, only the wireframe moved, not the entire object.
5. The anchoring of objects to the planes was not so consistent.
6. The matrix math involved in modifying the MVP matrix to handle transformations was hard to figure out.
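Conflict 6 boils down to composing three matrices every frame: ARCore supplies a view and a projection matrix, each anchored object supplies its model matrix, and the vertex shader needs MVP = projection * view * model, stored as column-major float[16] arrays (the layout OpenGL ES expects). A minimal standalone sketch of that composition, not the library's actual code:

```java
// Column-major 4x4 matrix math, as used with OpenGL ES: the element at
// (row, col) lives at index col * 4 + row in the float[16] array.
class MatrixMath {
  // Returns a * b.
  static float[] multiply(float[] a, float[] b) {
    float[] out = new float[16];
    for (int col = 0; col < 4; col++) {
      for (int row = 0; row < 4; row++) {
        float sum = 0;
        for (int k = 0; k < 4; k++) {
          sum += a[k * 4 + row] * b[col * 4 + k];
        }
        out[col * 4 + row] = sum;
      }
    }
    return out;
  }

  // MVP = projection * view * model; order matters, since vertices are
  // multiplied on the right: v' = P * (V * (M * v)).
  static float[] mvp(float[] projection, float[] view, float[] model) {
    return multiply(projection, multiply(view, model));
  }
}
```

In practice android.opengl.Matrix.multiplyMM() does the same job; spelling it out by hand just makes the ordering and the column-major indexing explicit.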

Visual outcomes:

Since this is an AR library, the project involves a lot of visual outcomes. Therefore, I used to summarize the work I had done over a span of 2-3 weeks in the form of a short YouTube video.

Outcome #1 :
Outcome #2 :
Outcome #3 :
Outcome #4 :
Outcome #5 :

Schedule & Github Links:

The schedule for the entire coding period was given along with my proposal, and I pretty much stuck with it till the end. In fact, I was a bit fast in the initial stage and was able to accomplish more than I had planned for each week. As the library progressed through more complex areas, there was a bit of a slowdown and some alterations to my schedule because of the complexity of the implementation, but nevertheless those alterations didn't have much impact.

Commit history :
GitHub repo :
Pull Request:
Commit Graph :
Utils repo (ant build) :
Utils repo (Gradle build) :

Current stage of Library:

The library is fully functional, allowing the creation of AR apps using a Processing sketch. This works perfectly fine when the object to be placed in the AR scene is in the form of an .obj file in the sketch's directory (refer to ImportObj under the examples of the AR Library) or in the form of native Processing commands like box(), sphere() and so on; in short, any PShape. Refer to the releases in the library's repo for further information.
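As a side note on the format the ImportObj example consumes: a Wavefront .obj file stores vertex positions as plain-text `v x y z` lines, which is what makes it a convenient interchange format for placing models in the scene. The toy parser below is mine, not the library's actual loader; it only extracts vertex positions and ignores faces, normals and texture coordinates:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the vertex data an .obj importer reads.
// NOT the AR Library's loader -- just a sketch of the format.
class ObjVertexParser {
  static List<float[]> parseVertices(String obj) {
    List<float[]> vertices = new ArrayList<>();
    for (String line : obj.split("\n")) {
      String[] t = line.trim().split("\\s+");
      // "v" lines are positions; "vt"/"vn"/"f" lines are skipped here.
      if (t.length >= 4 && t[0].equals("v")) {
        vertices.add(new float[] {
          Float.parseFloat(t[1]), Float.parseFloat(t[2]), Float.parseFloat(t[3])
        });
      }
    }
    return vertices;
  }
}
```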

Achieved so far:

1. Sub renderers:
- PBackground.
- PPlane.
- PPointcloud.
- PObject.
2. Building the library using Gradle (ant previously).
3. AR Library release v1.0.
4. Examples for the library.
5. Documentation to build the library.

Things I've learnt:

Non Technical:

1. Working with a huge code base.
2. How to work with an organization.
3. Healthy and regular interaction with mentors.
4. How to maintain guidelines during coding sessions.
5. Documentation.
6. Managing both academics and project work.
7. Commitment to working for hours together.


Technical:

1. Deep understanding of ARCore.
2. Working with OpenGLES.
3. How to manage migration of build system of the project.
4. Maintain multiple branches during the implementation of new features.
5. Java, Gradle, rendering, Processing, handling callbacks, etc.

That's all for GSoC'18. Thanks for reading!