
Updates from Siena, Italy!

[Image: solomonsD]

Hey Guys!

Hope everyone is surviving the semester well enough! I happen to be in Siena, Italy for the Computer Applications in Archaeology conference, which is awesome! I'm giving a presentation with Jeffrey Glover from the Anthropology department and Brennan Collins from the English department on the collaborative effort of recreating a past landscape of Atlanta. For my part, I'll be talking about the 3D Atlanta project and showing off some of the work we SIFs have been doing.

Speaking of which, it's time to update you guys on where we're at with the 3D Atlanta SIF project. I finally finished my textured model of the Solomon Barbershop at 60 Central Avenue! It looks beautiful, to me at least. I had to brush up on some Photoshop skills to paint the building's texture based on the old photograph. Below you can see the finished result compared to the original black and white photo I used as a reference.
[Image: solomon backdrop for model]
[Image: solomonsB]
As you can tell from the screenshot below, the UV coordinates are far more organized than they were before. There are still a few spots where the proportions are off, but luckily they fall on small, barely visible faces of the model, crevices really. As I explained in previous posts, unwrapping a mesh's UV coordinates is a complicated process of keeping everything proportional to the actual image; otherwise you run into problems like same-sized bricks in a texture appearing at different sizes on the model.
[Image: solomonsE]
When I have time, I would also like to create a normal map for the model to help create visual depth where the model lacks it, for instance on the small pieces of wood dividing the window panes. As it stands, I used some shading tricks in the texture to achieve pretty convincing depth without a normal map, but these models and textures can be tweaked at a later date.
[Image: solomonC]
Wasfi and Nathan are hard at work finishing their models as well, and by the end of the month we'll have not only 3 complete models but a fully interactive environment with a quick Oculus Rift setup for everyone to start trying out. I would also like to implement the Leap Motion controller you saw in earlier posts about the Digital Signage project, so that some of the interactions in 3D Atlanta can be touched with a virtual hand while wearing the Oculus Rift. Obviously this will change the way the user interacts with the environment, but we can figure out those details as we arrive at them! For now we are sticking with the point-and-click interaction theme. The other team members have put together some impressive information as well, so stay tuned as I update you on that, or read their blogs to find out more!

Cheers,
Robert

3D Atlanta Updates & Other Tidbits

[Image: 3datlantapostA]

Hey Guys!

Couple of new updates to share with everyone about the 3D Atlanta Project!

First off: in two weeks we should have 3 completed architectural models to show off, which is pretty exciting! Wasfi, Nathan, and I are hard at work getting them ready! The other SIFs are currently working on compiling their historical research into interactive pieces. What I mean by that is this:
If they find something on Coca-Cola from the 1920s that is relevant, but we don't have enough information to fill, say, an entire booklet, we can simply hang a flier on a building's wall that a player can interact with. The question then is: what is this interactivity supposed to look like?
[Image: 3datlantapostB]
After discussing a lot of different options, we came up with the simplified point-and-click idea. The point-and-click paradigm is well known and intuitive. We aren't trying to create a new paradigm of interactivity; rather, we're using pre-existing paradigms of interactivity to change a paradigm of educational interest and engagement. So, point-and-click, which some of you might know from the ubiquitous hidden-object mystery games all over app stores and Steam, from certain classics by Sierra or LucasArts in the 90s (The Secret of Monkey Island, Quest for Glory, etc.), or perhaps from the better-known Myst series.
[Image: 3datlantapostC]
In our environment, although it is a full 3D environment from a first-person perspective, the point-and-click interface simply means that when the player's cursor (controlled with the mouse, or triggered when an object is simply close to the player) hits an interactive object in the environment, the object turns bright red to signify that the player can use it. Once used, the player's controls are replaced with a new set that allows a different kind of interaction with the information being presented, like a newspaper or flyer. I've programmed a generic enough interaction script in Unity that any object can quickly be made interactive by attaching the script and linking up the information to be displayed.
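
In sketch form, the idea looks something like this (a simplified reconstruction for this post, not the actual project script; the class and method names are mine):

```csharp
using UnityEngine;

// Simplified sketch of the generic interaction idea: attach this to any object
// with a collider and it turns red while the player's cursor ray is over it.
// Clicking hands control off to whatever interaction the object links to.
public class Interactable : MonoBehaviour
{
    public Color highlightColor = Color.red;

    private Renderer rend;
    private Color originalColor;

    void Start()
    {
        rend = GetComponent<Renderer>();
        originalColor = rend.material.color;
    }

    void Update()
    {
        // Cast a ray from the camera through the cursor and check
        // whether it lands on this object.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        bool hovered = Physics.Raycast(ray, out hit) &&
                       hit.collider.gameObject == gameObject;

        rend.material.color = hovered ? highlightColor : originalColor;

        if (hovered && Input.GetMouseButtonDown(0))
            OnUse();
    }

    // Override per object type: newspaper, flier, booklet, etc.
    protected virtual void OnUse()
    {
        Debug.Log("Used " + name);
    }
}
```
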
[Image: 3datlantapostD]
One cool example is a 3D booklet that pops up with animated pages. The background darkens, the controls now flip pages instead of moving the player, and a simple button press (or whatever method we settle on) makes the information disappear and returns control to player movement so exploring can continue.

I had some additional free time to play around with the environment as well and created a generic vehicle script. Basically, I plot out a set of nodes, attach those nodes to any vehicle model (in this case a streetcar), and it will move from node to node indefinitely on a circular track. Eventually we can program in more interactive behaviors, like a streetcar stopping at certain corners to pick up passengers, or stopping if the player runs in front of it.
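
A minimal sketch of that script (again, a stand-in for the real thing; the nodes are just empty GameObjects plotted around the block):

```csharp
using UnityEngine;

// Simplified node-following vehicle: assign a loop of node transforms and
// the model (say, a streetcar) walks them forever on a circular track.
public class NodeFollower : MonoBehaviour
{
    public Transform[] nodes;  // waypoints plotted in the scene
    public float speed = 3f;   // units per second

    private int target = 0;

    void Update()
    {
        Vector3 goal = nodes[target].position;
        transform.position = Vector3.MoveTowards(
            transform.position, goal, speed * Time.deltaTime);

        // Reached the node? Advance and wrap so the track stays circular.
        if (Vector3.Distance(transform.position, goal) < 0.05f)
            target = (target + 1) % nodes.Length;
        else
            transform.LookAt(goal); // face the direction of travel
    }
}
```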

In the next few weeks we are going to have something pretty spectacular to show off!

In other news about my goings-on in the SIF program: I've finally managed to get consistent results out of the 3D laser scanners in CURVE! I was able to get consistent geometry before, but the color texturing was coming out mottled and, well, bad. I found a new setting in a newer version of the software that seems to have fixed that issue, so now all that's left is to write up a handbook so that someone coming into the lab can use the scanner with simple-to-follow instructions (it's already pretty simple to use, but the GUI can be a bit intimidating at first).

I've also been spending a lot of time giving workshops and consultations to a creative writing class taught by Robin Wharton. Their projects involve taking objects from the MARTA collection, photographing them, making models of them, and writing creative narratives for the objects. I've been showing them how to take photos for photogrammetry and walking them through the many steps of processing the data. They have already produced some really awesome results, and I'm excited to see what they finish by the end of the semester! It also means there will be plenty of objects to fill our Leap Motion-driven Digital Signage project with!

I hope everyone is having a good semester! I'm working on my thesis as we speak, so I guess mine's going okay, lol. Wish me luck!

Cheers!
Robert

New Semester–Same Exciting Projects

[Image: 3d atlanta poster 1]
Hey Guys!

*long sigh of relief*

Finally on the other side of a flu storm that started last week. The other good news is that I get to give you guys some updates on all the projects I’ve been working on.

Let’s start with the Digital Signage Project:
[Image: sifpostA]
I turned in a final prototype of the Digital Signage Project to the Exchange to start the process of getting it on campus. It's been a long road with that project, but it's worked out great! Both hands can be used to interact with the objects, but only the right hand can interact with the arrows that cycle through the different models. It will be very easy to add more models as we need them: I just need to drop in a few files, add a few lines of code, and recompile the project onto a flash drive. The colors and fonts used in the project are all official Georgia State University colors and fonts, and we even have our own official logo for the SIF program! If you get a chance, go play with the project at the Exchange and send in any feedback! It's a basic prototype right now, but with time it can be expanded to do more. My main focus the past few weeks has been split between the 3D Atlanta project and the new NextEngine scanners we have at CURVE.
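
For a sense of how simple the model swapping is, here's a hedged sketch (my illustrative names, not the shipped code): the signage keeps an array of prefabs, and the arrows just step through it.

```csharp
using UnityEngine;

// Sketch of the model cycling: adding a new object means dropping its files
// into the project, adding the prefab to this array, and recompiling.
public class ModelCycler : MonoBehaviour
{
    public GameObject[] modelPrefabs; // the 3D-scanned objects to show

    private int current = 0;
    private GameObject shown;

    void Start()
    {
        Show(0);
    }

    // Wired up to the right-hand arrow interactions.
    public void Next()     { Show((current + 1) % modelPrefabs.Length); }
    public void Previous() { Show((current - 1 + modelPrefabs.Length) % modelPrefabs.Length); }

    private void Show(int index)
    {
        if (shown != null) Destroy(shown);
        current = index;
        shown = Instantiate(modelPrefabs[current], transform);
    }
}
```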

NextEngine Scanners:

These things are great! They do a great job of scanning small objects in incredible detail. They do have some issues with color texture capture, however. The textures get zero color correction, so the final textures often come out mottled, almost as if a camo print of light and shadow were laid over them. Obviously that isn't a great result, and I'm currently working on ways to solve it, namely setting up a small light studio to test different lighting conditions. Pitch black doesn't seem to work, so I'm going to try a bright environment. The problem is the NextEngine's internal camera flash: it's very bright, and I think it's what's causing the drastic color differences between scans. Even with no ambient light, the flash is still too strong. My next attempt will be to surround the object with several light sources to create a strong ambient light level that offsets the brightness of the machine's flashes.

My goal with these scanners is to create a help file with Andrew and Nathan so that anyone can walk in and start using them by following a step-by-step guide we've worked all the kinks out of. Luckily the machine does a good job of automating the process; it's just a matter of tweaking a few settings.

3D Atlanta:
[Image: 3d atlanta poster 4]
[Image: 3d atlanta poster 5]
3D Atlanta is making some real headway this semester! A few of us are working on actual models of the building facades we have, and I'm nearly finished with my first one. The modeling is completely done; only the texture is left. I've already spent some time unwrapping the UV coordinates to a decent pattern on the image. You can see the process in the screenshot: I use a colored grid for a texture and slowly unwrap the faces of the mesh onto it, i.e., the rectangular 2D plane where a window sits on the building gets mapped to a rectangular portion of the texture. It's a time-consuming process, but it's roughly half done. While sick this past week, I took a break from that process (it requires a lot of mental focus) and dropped the model into Unity to start looking at ways of implementing interactivity, even without a texture. My next post will deal more with that aspect. We want the door to possibly trigger an interactive newspaper that takes up the screen, giving information or narrative about the building. A future prototype will add the interior environments of the buildings, but that's a ways away! For now we are focusing on getting a working prototype!

I hope everyone has had a good start to their semesters! Mine’s admittedly been a little rough, but I’m slowly getting back on my feet! Thesis defense is coming up fast and I’ve got a lot of work to do!

Cheers,
Robert Bryant

Oculus Rift — The Nausea Machine

[Image: oculus]

Hey Guys!

Our Oculus Rift DK2 kits are in! I spent a great deal of time with one over the weekend, so I'd like to give a quick overview of what I've discovered:

1) It's pretty difficult to set up. It took about 5 hours of fiddling with settings and digging through online forums to get this thing working properly. That said, I was using my laptop, and the DK2 is not particularly fond of laptops with dual GPUs like mine: rather than use my Nvidia GPU, it defaults to the integrated Intel GPU. This is a problem on their end. Regardless, I found a weird workaround to get it going! The downside is that I can't mirror the goggles' view to my desktop, so unfortunately you can't see what someone is playing until the dual-GPU issue is fixed. Beyond that, learning the settings to adjust pupil distance and so on is not particularly intuitive either, which is problematic because of the nausea that occurs if the settings aren't tweaked properly.

2) Nausea, nausea, and more nausea. No matter how I tweak the goggles' settings, some demos or games will consistently make me ill after about 5 minutes. Half-Life 2 is one of those games. Although it is stunning to explore, I get ill quite quickly and have to put the goggles down, and I'll continue to feel ill for about 15-20 minutes afterwards. It's hard to pass up though: nothing is quite as unnerving as walking up to a person in the Half-Life 2 engine and looking at them in true 3D. They look like a moving wax statue, yet missing a soul. Maybe it's just me being metaphysical and weird, but it kinda creeps me out: they look real, and my brain gets confused. It's a pretty awesome experience.

3) The head tracking is pretty awesome. The provided demo will let you rotate your head around a branch to see its underside; it's insane. You really do feel immersion, and you get frustrated when you can't grab things. Believe me, I've unconsciously reached out for something and nearly hit my friend sitting next to me in the face.

4) It's very lightweight compared to the older model. I think it weighs around 550 g, which is pretty nice.

5) The wires are crazy, although better than the first dev kit's. They require a lot of careful stringing, but no matter how carefully you run them, they inevitably tangle into a complex assortment of attachments, which makes moving the unit around quite a hassle; every time it's moved, the wires end up worse off.

Overall, I'm pretty excited about using it with the Decatur St. environment and digital signage around campus. I don't think we'll get to it this semester, unfortunately, although who knows. Soon enough, once we get the kinks worked out (like figuring out a method to measure people and create profiles before they use it, to reduce possible sickness), we'll set up some kiosks around campus for students to explore our SIF environments in awesome detail!

Cheers,
Robert

Another Post on Gamification!

[Image: octo]

Hey All!

I've been working like crazy on comprehensive exams this week, so I thought I would share some of the latest dialogue on gamification that I've researched and written about. I think it's a highly relevant topic for a lot of contemporary research with any public leaning, like the reconstruction of Decatur St. we're still investigating and researching. Enjoy!

How Gamification has Transformed Web-Based Interaction: Black Hat vs. White Hat

The terms white hat and black hat originate from the hacker community. White hat hacking refers to breaking cyber security barriers for non-malicious reasons, such as testing internal security for vulnerabilities (Knight 2009; Douglas 2010: 503), and is sometimes extended to civil activism like leaking documents to the press. Black hat hacking refers to the violation of computer security systems for malice or personal gain (Moore 2005: 258). The dichotomy lies in the intention behind one's actions. The terms were applied by Yu-Kai Chou (2014) in his theories of gamification to mirror the intention behind its application. White hat elements of design promote engagement by letting the user express creativity, feel success through mastering the gamified application, and find a higher sense of meaning; they foster positive emotions. Black hat elements are those that demand user action through unpredictability of rules, fear of loss, or the need for things given arbitrary value. The motivations to engage are still evident with black hat elements, but the end user experience elicits negative emotions.

Although Chou draws this distinction between good and bad motivating game design elements, black hat motivators are not inherently malicious; they are simply a different set of motivators. Black hat motivators play off negative emotions to force engagement and can be used in applications such as phone apps that make the user feel anxiety over personal health, like apps for smoking cessation or diet improvement that rely on fear-based reminders or on withholding things the user cannot have unless rules are followed within an irregular time frame. Chou argues for a balance between white hat and black hat design for a healthy and sustainable game or application of gamification. Black hat techniques might drive an initially large user base, but sustained negative emotions will eventually drive users away once they are able to leave, because they become exhausted by the feeling of having no control over themselves. White hat elements help sustain user interaction by handing control back to users once they are initially engaged with the application.

These two differing intentions behind gamification's goal of behavioral change provide the framework needed to look at case examples of web-based gamified applications and how those intentions have transformed their design.

Case Studies of Gamification

Tolmie et al. (2014) recently published two parallel gamification focus studies that reflect these concepts of black hat versus white hat intentions. Their goal was to test the feasibility of encouraging engagement across a broad spectrum of potentially interested parties and stakeholders in the realm of e-government or e-democracy: online platforms of civic engagement. They recognized that information and communication technology is increasingly the platform where information about political issues, and debate about them, is disseminated, fostering wider democratic participation and greater transparency and accountability in government policy and processes, a benefit to democracy and society (Tolmie et al. 2014: 1763). They add an important caveat to this idea: the importance of considering how well systems of online communication promote the more vital component of civic engagement, debate (Tolmie et al. 2014: 1763). They explain that newer techniques are necessary to harness this newer technology. People have moved away from consuming media through a single point of contact, i.e., news reports created centrally and sent homogeneously to an entire population at specified times, and toward individuals using some form of personal computer to consume information simultaneously and heterogeneously, from varying sources (Tolmie et al. 2014: 1764). Debate has moved from sitting around the television discussing with small groups to discussing in large-scale public forums, like social media.

They looked at two different gamified applications to compare user engagement under different sets of elements. Bicker Manor was an interactive game centered on scheduled debates within a hypothetical family, which users interacted with both through a web interface and through SMS messages to their phones. It expressly sought to discover ways in which web technology could be used to promote mass participation in an event (Tolmie et al. 2014: 1764). Their second case example was a gamified application called Day of the Figurines, played strictly by sending and receiving messages on a mobile phone that interacted with a small virtual town running twenty-four hours a day (Tolmie et al. 2014: 1765). The aim was to discern the differences in debate and interaction between the two games arising from the game design elements used in their production.

Day of the Figurines was designed to interrupt users' routines, their daily lives, whereas Bicker Manor allowed users to easily manage their interaction with the game so it would not hinder their routines. Day of the Figurines used black hat elements of negative emotion and stress: temporally unpredictable messages that required responses to succeed, forcing users to manage their interactions with the game as they came about, often leading to more interactions with others not playing the game who demanded explanations for the interruptions. Bicker Manor took a white hat approach where the rules were ordered and predictable, allowing people to integrate the game into existing routines of their choosing that did not disrupt their daily lives or demand explanation from others not playing the game (Tolmie et al. 2014: 1768).

The core black hat strategy of Day of the Figurines is a structure and engagement mechanism of unpredictability and sense of loss that led users to reason out an adequate rationale for prioritizing the game's interactions over the required routines of their personal lives. This was achieved through competition and the threat that a user would suffer negative consequences if a response was delayed by even a short length of time (Tolmie et al. 2014: 1769). Bicker Manor was not designed to elicit those motivations; it was designed to fit into those same daily routines through the intrinsic motivation of the user's own perceived value in interacting with the game. It was largely unsuccessful in comparison to Day of the Figurines, especially with regard to engagement, not only with the game but with the real world around the player.

In Day of the Figurines, users felt more engaged with the game because not only were they randomly interrupted by its requests for action, they were also required to explain those interruptions in their daily routines to those around them who did not understand, disseminating information further and promoting the game and its engagement to others simply by engaging with it. Because no one was forced to explain their limited engagement in Bicker Manor, whose interactions largely took place privately and unobtrusively with little intrinsic motivation beyond an arbitrary point system, players became bored, one complaining that "it was more like filling in a questionnaire" (Tolmie et al. 2014: 1769).

These studies show the power of black hat game design over white hat game design. Although players complained that the content in both games was neither motivating nor memorable, they still engaged more with the black hat designed game (Tolmie et al. 2014: 1770). That being said, there are limits to the level of disruption a gamified project should implement. Some unpredictability is good in that it forces interaction and engagement, but this is still an indirect form of engagement. How can engagement be intrinsic and direct? A balance between black hat and white hat design elements seems to be the answer.

Chou (2014) offers several examples of recent gamified services and games that he has mapped between black and white hat design element usage, and of the issues associated with a game being out of balance. Zynga, the company behind Farmville, a popular Facebook game where one plays a farmer, largely works with black hat techniques, where a user's motivation stems from the anxiety of real social pressure and perceived personal pressure to maintain one's farm and acquire in-game currency, achievable only at unpredictable or highly specific times of required interaction. The cost of not interacting is a sense of being left behind and watching one's farm deteriorate. The engagement follows the same patterns as Day of the Figurines, but takes it a step further in interrupting not only the users' daily routines but also the routines of their social circles, by offering incentives to directly solicit them through social media in a pyramid scheme of in-game currency accumulation.

To echo Chou's point about the temporal instability of games that focus entirely on black hat design elements: Farmville and its developer, Zynga, have been in steady, rapid decline since Farmville's release in 2009. As of 2013, the decline was obvious after numerous employee cuts and a 70 percent drop in the company's share value (Bachman & Brustein 2013). Games and applications that strike a better balance between black and white hat design elements, Chou (2014) argues, have had longer success cycles, including Facebook, Twitter, World of Warcraft, and Candy Crush.

What do these Case Studies Teach Us?

When utilizing game design elements to further behavioral outcomes, it is important to understand the mechanisms of these elements and whether or not they promote positive and healthy emotional engagement. In the context of the Phoenix Project's goal of applying gamification to an online database of archaeological and historical material, with the end goal of achieving civic engagement, it is not only an ethical requirement to use a responsible balance of white and black hat design elements that promotes engagement rather than addiction; it is also a requirement for building a long-term, sustainable community of users. Engagement cannot be civic engagement without sustainability and a framework that allows users to exercise their own agency. Black hat elements that promote accountability and interaction through social pressure are integral, but they should never override a user's agency; they should task the user with exercising it and provide the white hat elements of ownership, accomplishment, meaning, and empowerment for that agency to engage with.

Bachman, Justin, and Joshua Brustein (2013). "A Short History of Zynga's Rapid Decline." Businessweek. http://www.businessweek.com/articles/2013-06-04/a-short-history-of-zyngas-rapid-decline

Chou, Yu-Kai (2014). "Octalysis: Complete Gamification Framework." http://www.yukaichou.com/gamification-examples/octalysis-complete-gamification-framework/

Knight, William (16 October 2009). "License to Hack." InfoSecurity 6(6): 38–41.

Fun With Digital Signage

Hey all!

Just wanted to give a quick update on what we're doing with digital signage. In addition to trying out the iPad portal idea, we're looking into making Leap Motion-controlled screens where students can interact with 3D-scanned objects from the MARTA collection housed in the Anthropology Department.

It's a little rough at the moment, but through Unity I've built a test run that's working pretty well. Two hands enter the screen to manipulate the object with realistic physics. We're using a 1920s whiskey jug at the moment; luckily I can't break it in the virtual world, because I've dropped it multiple times. Using the Leap Motion is a bit of a skill in itself, albeit a fun one to learn.

Later this coming week I’ll give more detail and some screenshots of a more finalized version!

-Robert

The Weeks Just Keep on Getting Busier!

[Image: agisoftdemo]

Hey guys!

This was another pretty productive week! Andrew and I ran two workshops on how to use Agisoft PhotoScan. The first workshop had no turnout, unfortunately, but our second one this past Friday drew a few very interested and excited people. I explained how the software works and showed examples of running through the workflow of building a 3D model from a set of photographs. It's a pretty awesome software package, but it also takes some finesse in understanding the settings to get better results. These settings are key, because a single set of photographs can align beautifully, or not at all, depending on which settings one uses. I beseech someone to come out to the next set of workshops we hold this semester! We have the software installed on all our computers, which means we can start doing a lot of on-the-fly modeling in the workshops, with various groups working at different workstations!

Next week my goal is to finish figuring out how to create a 3D cube from a list of points rather than just a 2D plane. I've tried and failed a few times already, so I have to go back to reading up on the workflow surrounding triangle stripping; I'll be excited to share what I figure out! This will help us get the buildings accurate in the 3D reconstruction of Decatur St., because I can start inputting exact measurements for buildings that don't follow a strict right-angle cubic footprint, which is all of the buildings and sidewalks. (A rough sketch of the direction I'm headed is below.) We also have another meeting scheduled with Michael Page from Emory to start learning how GSU and Emory can team up to get this project running faster.
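
Here's that sketch: untested scaffolding in Unity C#, not the finished workflow. It assumes a unit cube with shared corner vertices, and the winding follows the clockwise-is-front rule from my last post.

```csharp
using UnityEngine;

// Rough sketch of a cube from a list of points: 8 corner vertices and 12
// triangles, each wound clockwise as seen from outside so every face points
// outward. (Shared corners mean smooth-shaded edges; a production cube would
// duplicate vertices per face for hard edges and proper UVs.)
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class CubeFromPoints : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        mesh.vertices = new Vector3[]
        {
            new Vector3(0, 0, 0), // 0: back  bottom left
            new Vector3(1, 0, 0), // 1: back  bottom right
            new Vector3(1, 1, 0), // 2: back  top    right
            new Vector3(0, 1, 0), // 3: back  top    left
            new Vector3(0, 0, 1), // 4: front bottom left
            new Vector3(1, 0, 1), // 5: front bottom right
            new Vector3(1, 1, 1), // 6: front top    right
            new Vector3(0, 1, 1), // 7: front top    left
        };

        mesh.triangles = new int[]
        {
            0, 3, 2,  0, 2, 1, // back   (z = 0)
            4, 5, 6,  4, 6, 7, // front  (z = 1)
            0, 1, 5,  0, 5, 4, // bottom (y = 0)
            3, 7, 6,  3, 6, 2, // top    (y = 1)
            0, 4, 7,  0, 7, 3, // left   (x = 0)
            1, 2, 6,  1, 6, 5, // right  (x = 1)
        };

        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```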

The last thing I want to mention is digital signage and some cool ideas we have for it on our campus. We're in the process of linking two iPads together through a live video stream, with the goal of creating a 'portal' around campus. One screen might be set up in the student center while another sits in the plaza. This will let students see one another and interact from different places across campus through a novel little portal-like window. If the venture goes well, we may add more, so be on the lookout!

Cheers,

Robert

What a Long and Great Week!

[Image: 3dtitle]

Hello All!

As you can probably tell, I've spent a great deal of time this week hacking my Edublogs WordPress CSS. It involved opening up the source code in Firefox's debugger and figuring out all the various tags and <div>s I could change around. It was surprisingly difficult and time-consuming: often, simply targeting IDs or classes wouldn't override the built-in CSS and required strange workarounds. I still couldn't find a way to change the background color that actually worked; I had to change it in the WordPress dashboard, which lets me change the background color of exactly three things, the page background being one of them. There are still a few issues I'm working through today, some unexpected side effects: the z-indices are breaking some of the <a> links (they're hiding behind a layer I'm still trying to identify). Hopefully it won't take me too long, but I want to get this finished today. Why? Because I need to work on things that are more important long-term, like the 3D environment, which has seen some pretty good progress as well.

Brennan, Alexandra, Thomas, and I had a pretty great meeting earlier with Michael Page from Emory, who is already working on a reconstruction of Atlanta in the 1930s on a much larger scale. It will be great to share resources with them, and our projects will dovetail nicely because, as it turns out, we're using the same engine, Unity, to build an environment. We identified some obstacles and challenges in our discussion that I think can be overcome. 3D modeling is one of them. I'm unfortunately not very good at 3D modeling; I can do it, but it would take me a week to do what would take someone more skilled a few hours. We also have to think of 3D modeling in terms of available hardware resources: attempting an exact replica of architecture down to the finest detail can be both time-consuming and unusable. Unusable is a pretty strong word, so what do I mean by that? I mean that if we have a city block filled with high-polygon buildings (in the millions of polygons), no one can actually interact with it, arguably even on a powerhouse computer. It could be rendered frame by frame into video, but real-time rendering is problematic. Video game development solves this problem through approximations and tricks that use textures with bump mapping to emulate 3D detail without actually using more polygons. So what is the point of this caveat? We have to take the extra time to figure out how to best represent the buildings using as few polygons as possible, so that the program is accessible to as many people as possible across varying hardware configurations. This process takes a little more time and involves software like GIMP or Photoshop to make the textures and bump maps. We also have the option of 'baking' light maps: the process of artificially lighting a texture as if a light source were rendered on it, without actually using a light source, which would take more hardware resources to process. These light maps are normally best for static environments where light sources don't change, as opposed to, say, an environment with real-time sun rendering that shifts shadows over time, which I don't think we'll need.

I also wanted to touch on the 'science' behind 3D modeling this week, to share what I've been researching and playing around with. I have very little experience in true 3D programming, which differs from 3D modeling in that one procedurally generates a model through code rather than building it in a visual environment like 3ds Max or Blender. There are three main components to a 3D object: vertices, triangles, and UV points. Faces could arguably be included, but those are usually derived through the triangle stripping. So let's start with vertices:

[Image: uvplane]
I made this image to help visualize the process; don't be afraid of it. We have 4 vertices: A, B, C, and D. Vertices are pretty straightforward. They can be 2D or 3D; in this example we'll make them 2D because it makes the explanation a little easier. We're working with what's called a 'plane,' a flat surface in 3 dimensions. That means, if we remember our geometry, that the height of the object is 0: the X and Y coordinates will change, but there will be no extension into the 3rd dimension, so the Z value remains 0. In theory we could simply make a 3-vertex triangle, but a square plane better shows the process of triangle stripping. So we need 4 vertices. But vertices alone tell us nothing about the object we're making: how do these vertices connect? It might be easy to assume with 4 vertices, but once you're dealing with a few thousand, there are hundreds of ways various vertices could be linked. The next step is telling the model, through triangles, how the vertices connect.

In 3D programming we can only define triangles, not squares. One could define a square as ABDC, but in practice we use the smallest polygon available: the triangle. So how do we represent a square with triangles? Two right isosceles triangles. Let's look at T1 (triangle 1). We could label it ABC, BCA, CAB, CBA, BAC, or ACB. How are we going to choose the order of the vertices? It actually makes a big difference: the 'face' of a polygon is determined by a clockwise order of vertices. To illustrate, if I look at this plane head-on, like in the picture, two things can happen: either I see the front of the plane (a face) or I see the back of the plane (which isn't visible). Faces are the visible sides of triangles, and the order of the vertices is how the model determines which side is the front and which is the back. If I ordered the vertices as CBA, BAC, or ACB and stared at this square, I couldn't see it. It would exist, but the face would point away from me; I would have to rotate the object in 3D space to see the 'back' of it, which I had made visible instead of the front. By using a clockwise ordering of the triangle's vertices, I'm telling the model: "This is the front face of the triangle." So we could order the vertices as ABC, BCA, or CAB. That's 3 ways; which one should we use?

This gets us into triangle stripping: triangles attached by shared sides. We obviously want to join along the shared BC side, because we can't join the triangles any other way and have the model work right (I believe; I'm no expert, but I'm under the impression that joining an unusable side is impossible). So we now know we have 2 triangles that can be ABC, BCA, or CAB for T1, and BDC, CBD, or DCB for T2. How do we link these two triangles? What if we do this: CAB and CBD. Does it work? Yes, because CAB and CBD share the same 2 vertices and both follow a clockwise ordering. When the model reads these two triangles, each containing 3 vertices, it strips them together into the shape of a square plane. So that's a pretty basic, hopefully not too complicated way of building a 2D plane. How do we texture it now? That's what UV coordinates are for (not ultraviolet =P). They use the same vertices but tell the model how to apply a texture to the triangle faces. Let's say this square at full scale represents 40 pixels on a screen. We could make a texture that is 40px x 40px and apply it to the mesh (the model), but the mesh won't have any way of knowing how to apply that texture; we also have to tell the mesh how to apply it.

Surprise: UV coordinates also need to follow a clockwise motion. So how would we map the full 40px x 40px texture onto our plane? We use the point ordering ABDC. Why start with A? It depends on how we program the vertices into an array; in this case, I listed the vertices in the order ABDC when making the plane, so the UV coordinates need to match that order. This can seem confusing, because the triangles don't use all 4 vertices, only 3, so where does this order come from? In code, I initially fill an array (a sequential list) with my vertices, and I chose to store them in ABDC order. When I define triangles, I'm referencing the same vertices I already defined in that array. UV coordinates also use the same vertices and follow the same order in the array. If you feel lost on that, don't worry, and welcome to my world; I've been wracking my brain for a few weeks now trying to apply this information to 3 dimensions and feel like I'm losing my mind, though I'm nearly there. The last thing about UV coordinates: how does the model know to use the outer 4 corners of the texture? It doesn't; you also have to define that (nothing is easy in this process, lol). A UV coordinate set uses percentages to define the area of the image. If we have a square that is 40 x 40 units, our corner coordinates would be ABDC: (0,40), (40,40), (40,0), (0,0). These won't work as UV coordinates, since UVs are based on a proportion of the image (this will make more sense in a second). Remember when I said I added ABDC to a sequential array (list)? That means A = 0, B = 1, D = 2, C = 3. We start with 0 in programming rather than 1, which can be confusing, but in our array these 4 points will always correspond to this exact order. UV coordinates rely on matching this same order but use a ratio. If I wanted to use the whole texture, I would define 4 UV points: (0,1), (1,1), (1,0), (0,0). If you think of 1 as 100% and 0 as 0%, with the point of origin at the bottom-left corner (it always will be the bottom-left corner for UV textures), we might be able to see what's going on here.

[Image: uvplaneB]
Basically, the orders have to match: the first point in the UV list has to correspond to the first point in the XY coordinate list, and likewise for every other corresponding point. The logic follows as: for point 1 (A) in the XY list, pin the top-left of the image (0% right, 100% up) to that coordinate. For the second point (B), pin the top-right of the image (100% right, 100% up) to that coordinate. For the 3rd point (D), pin the bottom-right (100% right, 0% up) to that coordinate, and for the 4th point (C), pin the bottom-left corner of the image (0% right, 0% up) to that coordinate. Using percentages for UVs lets us customize things a bit: if I wanted to use only the bottom-left quadrant of my image, I could set my 1s to 0.5 (50%), and it would stretch the bottom-left quadrant of my texture across the entire plane.
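
Putting all of that together, here's the whole example as a runnable Unity script. This is a teaching sketch mirroring the diagram (our actual building meshes come out of modeling software, not code like this):

```csharp
using UnityEngine;

// The square plane from the diagram, built vertex by vertex.
// The vertex array is stored in ABDC order (A=0, B=1, D=2, C=3).
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class QuadFromScratch : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        // A = top-left, B = top-right, D = bottom-right, C = bottom-left.
        // Z stays 0, so this is a flat plane.
        mesh.vertices = new Vector3[]
        {
            new Vector3(0f, 1f, 0f), // A = index 0
            new Vector3(1f, 1f, 0f), // B = index 1
            new Vector3(1f, 0f, 0f), // D = index 2
            new Vector3(0f, 0f, 0f), // C = index 3
        };

        // Two triangles, CAB and CBD, both wound clockwise so the shared
        // BC side strips them together into one front-facing square.
        mesh.triangles = new int[] { 3, 0, 1,   3, 1, 2 };

        // UVs follow the same ABDC order; values are ratios of the texture,
        // so these four corners stretch the full image across the plane.
        mesh.uv = new Vector2[]
        {
            new Vector2(0f, 1f), // A -> top-left of the texture
            new Vector2(1f, 1f), // B -> top-right
            new Vector2(1f, 0f), // D -> bottom-right
            new Vector2(0f, 0f), // C -> bottom-left
        };

        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```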

So that's my knowledge of 3D programming in a nutshell at the moment. It's limited, but I think the time I've invested in going deeper over the past 2 weeks deserves some blogging to explain the process we're using for our 3D environment of Decatur St. Granted, we'll be using 3D modeling software far more than this kind of coding, but having a deeper grasp of the math and code behind 3D models lets me better understand and customize the environment for our project. If you have any other questions about this topic, feel free to ask! Sorry for such a long post, but I hope it's interesting and explained well enough for others to understand; it definitely took me a while at first.

Cheers,

Robert

3D Reconstruction of 81 Decatur Street Project

[Image: New_York_Cafe_for_Colored_62_Central_Avenue_SW_Atlanta_Georgia_circa_1927]

Hey Guys!

Just wanted to talk a little this week about the project some of the SIFs and I are working on! It's been a rough week thanks to a lingering cold that's been ailing me, but we've managed to accumulate a lot of good data to throw at this project. What is it, you ask? Well, every time you happen to walk by Classroom South on Decatur St., there is actually a rich history swept beneath the building that now stands there, namely the 81 Theatre. It started as a vaudeville stage and slowly evolved into a popular African American theatre from the 30s onward. The street was bustling with activity: pool halls, barber shops, clubs... it was an extension of the Auburn Avenue community's spark.

So our team wants to rebuild this block of Decatur St. as an interactive environment, a game of sorts. If you've been reading my blog up to this point, you'll know how into gamifying experiences I am. By adding a layer of engaging interactivity to this historical environment, we hope to promote education through engaging experiences that reflect what the different departments we represent do best: English and Literature, Anthropology, Geography, and Computer Science. It's one thing to build a historically accurate 3D environment through maps and computing; it's another thing entirely to fill it with narrative and meaningful culture that grabs attention and keeps it engaged. Can someone interact with this small microcosm and leave that interaction knowing more about the past than they realize? It's hard to say, but I'm confident we'll do a good job with CURVE and the Exchange's combined resources.

[Image: 3d model screenshot]

Here's a screen grab of the very primitive block I've started building. The textured facades I've placed have exact, historically accurate widths; I'm still working on the heights. How am I getting this information? Glad you asked. GSU's library has a vast, untapped resource of historical maps and photographs from the 20s, 30s, and 40s. The maps are various city planning maps that contain incredible detail: sidewalk widths, facade widths, interior measurements, occupant business names, placement of fire hydrants, sidewalk materials (tiled or granite), streetcar lines, and much more. Combining these maps with the numerous photographs we've already found gives us incredible historical detail to work with.

We'll be sure to keep you updated on our progress! It's going to be awesome.

Cheers,

Robert

Gamification Part 2: How can it promote education?

[Image: braid]

Hey Everyone!

This week has flown by! We're definitely making some headway at CURVE with one of our interactive environment projects, which I plan on detailing in next week's post. It's pretty exciting and involves a tremendous amount of available data, like maps showing the widths of sidewalks, streets, and building facades, as well as some interior measurements of the old Eighty-One Theatre, whose grave lies beneath our very own Classroom South. This week I just wanted to wax poetic about some potential applications of gamification in education and how to use it to promote projects.

It's definitely easier to talk about gamification in the context of video games. The whole point of a video game is creating a gamified experience, and everything else is there to support that function, whether it's sound, visuals, narrative, or novel controls. Jonathan Blow is an independent video game developer who got his start with Braid, a game that was wildly successful for an independent release. His immediate critical and public acclaim allowed him to start speaking publicly about the video game industry and its inherent problems, some of them ethical. Here's a video below; I welcome you to watch the entire presentation, but he only begins discussing gamification around the 50:00 mark.

The point Blow makes about modern game design, especially for social games like Farmville, is that people are being 'tricked' into playing simplistic games. The term 'tricked' has a negative connotation, but it's applicable. In the case of Farmville, a gamified process of data mining that people voluntarily enter into, the designers are the ones actually farming; the players are just clicking on a picture of a cow over and over again. Just because an activity is fun to perform doesn't mean that activity is healthy, physically or mentally. My goal in applying gamification to education or to an online database of archaeological material is to promote mental health and engagement with shared heritage. I want to put a $9.99 price tag on something engaging, not on something I solely profit from without regard for the buyer. Ethical gamification uses the same psychological manipulation to create engaging experiences, triggering our involuntary desires.

What are these desires? Games are competition, and competition does not only mean competing against another; it also involves competing with one's self. Activities that test our limits are inherently attractive: "Can I do this?" Gameful activities offer a 'safe' place to test and push those limits, privately or publicly. To apply this to the Phoenix Project at GSU, I have to ask myself: how do I foster an environment of self and public competition that is meaningful, engaging, and beneficial to the individual and the others involved?

One idea is to allow users to create their own interpretations of archaeological data; those interpretations, shown alongside the 'official' interpretations, can be voted on and discussed by the community. Logging users' 'achievements' is incredibly important, and letting people re-interpret artifacts is a heavy one. Beyond that, a user's logged hours, number of comments, number of answers provided, every aspect of their activity, can be measured and reported back to challenge and reward them. Something as simple as weekly competitions between users to see who can identify the most artifacts, or find the most errors: this is engagement. It uses the same tricks of creating an engaged interaction and applies them to something arguably useful and beneficial.

I'll end with this claim: I genuinely believe that gamifying activities is not only a good idea, it will be required by the end of the decade to achieve engagement. I say this because the advertising and media industries are already using it to steer our interests. It works. The only way to compete with this slow indoctrination is to counter it with educational interests that utilize the same concepts. This is certainly a heavy-handed claim, and I make it sound apocalyptic, which it isn't. I simply mean that the processes of learning and engaging communities, which I think already struggle, will continue to struggle until they begin to utilize the same tools other industries use to engage their audiences.

I’d love anyone to comment and discuss the topic! Until next time!

-Robert