21 Apr Layering of Realities: VR, AR, and MR as the Future of Environmental Rendering
Working remotely throughout the past year has accelerated the introduction of new approaches to real-time rendering, and with it, a new necessity was born: how can a person feel physically present inside a space without actually being there? Ultimately, designers turned to the virtual world, a vast realm of interactive built environments that can be accessed from the comfort of one’s home. Even the tools required, such as headsets and goggles, have become more accessible to the general public and are now sold at lower prices than when they first launched. We have become accustomed to building, modifying, and navigating between different environments, going back and forth between what is real and what isn’t. The truth is, virtual has become the new normal.
Virtual reality, augmented reality, mixed reality, real-time rendering, walkthroughs… several different visualization methods that serve a relatively similar purpose. The difference, however? Accessibility. We are no longer separated by a glass screen; we are in the screen, fully immersed in the project’s fabricated realm. VR and AR are intriguing because they parallel the movement of people in the real world, allowing them to explore the virtual space intuitively without complicating or confusing the process.
In theory, Virtual Reality (VR) and Augmented Reality (AR) are more or less the same; both serve the purpose of immersing users in virtual content. However, AR sits closer to the real end of the spectrum, as it overlays fabricated objects onto the real environment. VR, on the other hand, is the creation of a completely virtual environment. Mixed Reality (MR) is a hybrid of both.
As complex as VR/AR may seem, the use of computer technologies to create simulated environments has helped accelerate and facilitate the process of architectural visualization. Instead of taking weeks – and often months – to build physical models and replicas of structures, these interactive interfaces have established a new design language, one that bends the rules of physics and allows people to be present in the project, observing, hearing, and experiencing its architecture.
As for the future? It is evident that we will juggle between one reality and another effortlessly, reconnecting with distant people and exploring places virtually; time and place will become nothing more than mere concepts.
In an exclusive interview with ArchDaily, architectural designer and VR/AR developer Bryan Chris Brown shares how his architecture background helped pave the way into the world of VR/AR, the process of creating a virtual environment, and what the future of environmental renderings looks like.
Bryan Chris Brown: My first job in architecture was interning at GWWO.inc/Architects. In high school, I remember them walking me through the design process for a school building, and at the end of the presentation they showed me the full animated fly-through with realistic people, water fountains, and clouds. I remember being awestruck and thinking, “Wow! They made a Pixar movie for a building! This is so cool!”
BCB: After attending Morgan State University for a few years, one of my biggest frustrations was the amount of time it took to make these amazing images and renderings. One day, when I was rendering an image that had a time remaining of 5 hours, I decided to play some Minecraft while I waited for the computer to finish. I realized that games like Minecraft can render multiple images per second, while my render was taking five hours for a single frame! So I put down the game and started looking into game engines like Unreal and Unity. Within a few hours I was able to import my building into Unreal Engine as an FBX, give it super basic materials, and walk around it. I immediately saw the promise of never having to wait for a render again.
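To put that speed gap in numbers, here is a back-of-the-envelope comparison. The five-hour render comes from the anecdote above; the 60 fps figure is an assumed target frame rate for a game engine, not a number quoted in the interview:

```python
# Rough comparison of offline vs. real-time rendering throughput.
# 5 hours per frame comes from the anecdote above; 60 fps is an
# assumed game-engine target, used here only for illustration.
offline_seconds_per_frame = 5 * 60 * 60       # 18,000 s for one frame
realtime_frames_per_second = 60               # assumed real-time target

realtime_seconds_per_frame = 1 / realtime_frames_per_second
speedup = offline_seconds_per_frame / realtime_seconds_per_frame

print(f"Real-time rendering is ~{speedup:,.0f}x faster per frame")
# → Real-time rendering is ~1,080,000x faster per frame
```

Even with generous assumptions, the per-frame gap is measured in orders of magnitude, which is what makes iterating on a design in a game engine feel so different from waiting on an offline render.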
Shortly after picking up Unreal, I was able to get a position at Highrock Studios working with the HTC Vive on a project for Hatteras Yachts. It was incredibly daunting at the time, especially with Unreal not yet having full VR support (VR support was added in UE4 4.13; we started the project in UE4 4.12). Through a lot of effort I was able to get VR integrated into the project, and we made it a huge success!
VR helped me visualize my spaces in a way I didn’t realize was possible, I was able to see exactly how cramped the rowhouse I was designing would feel, or how large and spacious it’d feel if I made the ceiling just a foot taller. The biggest drawback of VR for me was not being able to share it, as a social experience, with the people around me. But with the advent of AR technologies you can have a shared experience with virtual content which is truly incredible.
DS: Can you briefly explain the difference between AR, VR, and MR? What are the different tools used during the process and what is each one mostly used for?
BCB: So I like to think of it like this:
AR (Augmented Reality): virtual content is overlaid and rendered on top of the real world.
VR (Virtual Reality): the real world fades away and you’re immersed in a new, virtual world.
MR (Mixed Reality): virtual and real-world items mix; they share the same lighting, cast shadows on one another, and in general become harder to distinguish.
DS: Rendering has evolved from hand-drawn sketches to fully immersive walkthroughs. Recently, a few architecture firms have begun creating VR sets to showcase their projects and give clients a feel for the space without actually being there. Many say this is the future of rendering. What do you think about that? Do you see it as an added value to the project, or are animated walkthroughs enough to give the client a sense of the space, especially since VR is costly and time-consuming to make?
BCB: I think there’s a lot to be said about the state of “real-time rendering”. There’s a clear benefit to having work be in real time and being able to take a photograph of it, instead of having to wait a few hours and do some Photoshop work to get an image. However, in the current state of the technology, ray-traced or traditionally “rendered” content will have a higher quality than real-time content. This gap will continue to diminish over time, especially with Unreal Engine 5’s new global illumination feature and Nvidia and AMD’s real-time ray-tracing technology.
BCB: It’s definitely an added benefit to have real-time experiences and walkthroughs for a project, especially in the earlier design phases, where having everything be pixel-perfect is less of a concern. Being able to quickly import your model or building concept into Unreal Engine and walk through it, seeing the impact of your design choices during the design phase, is an absolute game changer.
DS: Do you see a difference between when you got your architecture degree and the way the program is taught now? Do you think technology is aiding it or making it lose its fundamentals?
BCB: The fundamentals of architecture come down to form, space, and order. Mixed Reality encompasses a new way, and new methods, of designing those spaces. Of course, new students will need to learn the “basics” and foundations of architectural design, but there’s no reason why they couldn’t learn them in Mixed Reality.
DS: Have you ever built an architecture project in VR? If so, what was the process like and what were the elements that you mostly focused on having in it?
BCB: I’ve designed a handful of projects in VR; tools like Tilt Brush make it especially fun to sketch in virtual reality. There’s still a long way to go with software development before we can start doing schematic drawings in VR, but for concepting and visualizing the volume of a space, there’s no better alternative.
DS: What architecture project do you wish was created in VR?
BCB: I’d love to see scanning technology like Matterport be used to preserve architecture that’s scheduled to be demolished or rebuilt/renovated. There’s a huge opportunity to archive and collect the state of the world as it is now, and how we’ll be able to remember it for the future. Imagine being able to walk into the Parthenon, with all the detail captured as it was when it was brand new. It’d essentially be time travel.
DS: How many people are involved in a project and what is the process usually like?
BCB: At Highrock we had anywhere from 2-5 people working on a project at a time as 3D artists/developers. Of course, there were other architects and designers involved, as with all projects. The process usually started with receiving a Revit model from a client as an .FBX file, then going through and removing everything unnecessary for a VR experience, like bathroom fixtures in the case of a stadium or residential building. This helped make the experience easier to run on a multitude of computers and platforms. Once we had optimized the model, removed any artifacts, and fixed up the geometry, we’d import it into Unreal using Datasmith, then apply realistic materials from our library or from Megascans, a library of high-quality scans of real-world materials. This process usually involved a few back-and-forths, but we set it up in a way that allowed us to update the model mid-process in case last-minute revisions came in from the architect or other stakeholders.
DS: Considering the expenses of creating a VR set, how attainable do you think AR/VR will be in the near future?
BCB: Creating a VR set is actually not that different from creating an architectural rendering, and with tools like Datasmith that allow users to import straight from Revit or their design program, we’ll see the cost continue to fall over time.
DS: Do you think gamification is important in the architecture field? Why or why not?
BCB: There are certain aspects that are worth gamifying. For example, what if you could increase engagement for an architectural review by giving people a downloadable game they can walk around in and experience? That’s certainly valuable and a great way to gather feedback from the community where projects will be built. Imagine being able to show students at a school what their new building will look like in a way that’s super familiar to them, through the virtual interfaces they already use every day.
DS: What is your take on digital environments inside the academic landscape, is there a place for it inside Architecture schools, or do you think it will keep being a part of Computer Science?
BCB: I think there’s absolutely a place for digital environments inside the academic landscape. Especially as computers improve, imagine being able to simulate an earthquake, or a fire, and see how your design decisions impact the safety of a building’s potential occupants. The impact of these technologies goes way beyond just visualization.
DS: And finally, as time moves on, do you expect AR and VR to become a technology of common use? And in which way do you think we could use it in an everyday scenario?
BCB: Absolutely. I really like Ultraleap’s vision of mirror worlds, I think there’s a possibility of having this technology change the way we communicate and interact with one another.
Bryan Chris Brown has worked on numerous VR/AR projects, such as the MIT Reality Hack, a 5-day event of technology workshops, talks, collaborations, hacking, and more, as well as a project created with Highrock at the B&O Railroad Museum in Ellicott City, which utilized AR to annotate a physical model of the B&O Railroad.
This article is part of the ArchDaily Topic: Rendering, proudly presented by Enscape, the most intuitive real-time rendering and virtual reality plugin for Revit, SketchUp, Rhino, Archicad, and Vectorworks. Enscape plugs directly into your modeling software, giving you an integrated visualization and design workflow. Learn more about our monthly topics. As always, at ArchDaily we welcome the contributions of our readers; if you want to submit an article or project, contact us.