What better way to map your mind than in VR? Noda lets you do just that.
Hello there! Tell us who you are, and what you do in relation to Noda.
I’m Brian Eppert, developer of Noda. Aside from the logo, the incredibly helpful feedback from early testers, and the epic work of past and present geniuses who created the VR tech stack, I’ve done everything to bring the app out. It’s pretty amazing that today a single person can connect with users worldwide and release a VR app for sale or subscription in a few months.
I’m a tinkerer and a good programmer but a bad artist, so the asset and engine resources available and the VR-native app dev process are a huge boon. It’s much better now even than when I started with VR in 2014.
What’s the origin of Noda? How did you come up with the initial idea?
I’ve always pictured things in 3D as a way to understand them. When I explain something I’m always gesturing around trying to draw a picture to describe my thoughts. I even came to realize I needed to do it backwards, otherwise someone facing me would see the mirror image (I’m not sure it’s ever helped!). I relate to things spatially and I know I’m not the only one who does.
As for Noda I wanted to combine the immersive environment and natural interface of hand and body movement with structured information management. It seems like a good use for this technology. I’m sure VR will help make sense of big data and analytics but my feeling was to start small, with data more within personal reach like what you might draw on a whiteboard or picture in your head.
The nodes, tags and lines that make up Noda’s form are known as a Labeled Property Graph, and it’s inspired by Neo4j, an alternative database platform I’ve been interested in for years. Their big thrust is that the relationships between things are as important as the things themselves.
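To make the idea concrete: in a labeled property graph, both nodes and relationships carry a label plus arbitrary key-value properties. This is a minimal illustrative sketch in Python, not Noda’s actual data model, and the example names and values are invented:

```python
# Minimal sketch of a labeled property graph: nodes and relationships
# each have a label and a dictionary of properties.
# (Illustrative only -- not Noda's or Neo4j's real implementation.)

class Node:
    def __init__(self, label, **properties):
        self.label = label            # e.g. "Concept"
        self.properties = properties  # arbitrary key-value data

class Relationship:
    def __init__(self, start, rel_type, end, **properties):
        self.start = start            # source Node
        self.type = rel_type          # relationship label, e.g. "INSPIRED_BY"
        self.end = end                # target Node
        self.properties = properties  # relationships carry data too

# Two concepts connected by a labeled, weighted link.
idea = Node("Concept", name="Noda", kind="VR mind map")
db = Node("Concept", name="Neo4j", kind="graph database")
link = Relationship(idea, "INSPIRED_BY", db, strength=0.9)

print(link.start.properties["name"],
      f"-[{link.type}]->",
      link.end.properties["name"])
```

The key point, matching the interview, is that the relationship is a first-class object with its own label and properties rather than a bare pointer between nodes.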
While VR, especially on Vive, is all about 3D spaces and experiences, what made you feel that Noda had to be 3D, instead of (for example) a very large 2D space manipulated in 3D?
For math and programming and creative writing I’d always use paper or a whiteboard to sketch out ideas and communicate with others. It works initially but the paper or whiteboard’s always too small and on a flat screen you can’t very well draw behind or in front of something and maintain clarity. A lot can be done with up/down/left/right but sometimes things need to sit next to each other in a different way so you really need that third dimension.
I never really tried much software for drawing; for me, the mouse and keyboard interface gets in the way of direct expression. I guess a tablet and pencil might be nice for the pan/scroll/zoom and the undo/redo, but it still feels like an unnatural projection of 3D stuff onto 2D, like when you see the globe of the Earth or Moon chopped up and spread out across a big wall-sized poster.
What kind of projects, aside from mind-mapping, do you see Noda being used for?
As an app for creative thinking, or something I’m calling ‘Associative Concept Modeling’, I could see it being used for any project that needs a plan, or a phase of ‘figuring out’ separate from the activity itself.
I’ve heard from people who are using it to plan video presentations to their clients, modeling an ERP data flow, planning a website redesign project and more.
You have mentioned ‘kinesthetic thinkers’ in relation to Noda – can you tell us what that is, and how it applies to Noda?
Well, that’s based on the concept of learning styles from psychology and education. There seem to be a lot of models and theories for how we think and learn, and how some people favor one style over another, but more likely we switch between modes depending on the day or the task at hand. The main categories I’m familiar with are Auditory-Sequential, Visual-Spatial, and Kinesthetic (meaning body movement).
As a pithy proverb explains it: “I hear and I forget; I see and I remember; I do and I understand.”
Two things have come up for me lately. One is the idea that our cognition may be entirely based on metaphors developed from our sensorimotor system; I got tipped off to this by a mention of “Philosophy in the Flesh” on Voices of VR podcast #515. The other is a study showing that babies who don’t have use of their hands to make gestures are slower to learn language. Conversely, hearing children who learn sign language are more advanced at learning language and on other measures.
Something about moving is tied in to memory and learning and thinking. It’s also a bulwark against the bodily neglect that happens when we sit at a desk, shoulders hunched and breathing shallow. When using Noda it feels great to put on the headset, push the chair back, stand up and move around for a bit.
What can be done in 3D mind-mapping that can’t be done in 2D mind-mapping?
VR taps into the full range of intuitive 3D perception through head tracking and stereoscopic display in a way flat screens can’t. With room-scale and hand tracking you get the space and manual dexterity to create 1-to-1: if you want to see something from a different angle or distance, you just move your body or head; if you want to move or change something, you put your hand out and do it.
This removes the cognitive load of managing a more abstract interface. It also leverages unconscious systems and abilities like spatial memory, so you can more easily slip into a flow state where both sides of the brain are in tune and working harmoniously through analytic and creative thought.
Do you have any future plans for Noda?
For the product, the plan is to make it more usable and useful to more people. That means improving what’s already there, specifically the hand tooling, and extending to support new workflows, most likely by adding types of media and information you can get in and out of Noda.
For the creative team, I’m looking to expand from ‘one’ to ‘some’. Visual designers/artists, Unity/C# programmers, and anyone involved in alternative business structures like crowdsourcing or software co-ops – I’d love to hear from you. (Visit noda.io to contact me.)
Thanks for talking to us, Brian!