
The Museum Project has given me a brand-new appreciation for curators and their work. To physically display an object, especially a smaller item, one has to consider lighting, position, and other variables to highlight different details or features. Sometimes distance alone determines which details a visitor can see. Through this project, I fully understood the benefits of modeling an artifact digitally. The first and most obvious is the viewer’s ability to zoom in and out, pan, and rotate the item, giving them a complete view. This, combined with depth mapping and textures taken from focused images, allows fine details of the original object to be seen more clearly. Also, because the lighting is theoretically even, there are no glares or shadows, and the colors match the original. I say “theoretically” because if there are glares in your source images, or the scanner misses a specific part of the item, those textures will look blurry, though the item will retain its shape.
When working on this project, one of the first things that came to mind was the movie “Night at the Museum.” With this technology and movement mapping, we might be able to animate real-life artifacts, making the museum experience even more immersive.
Our project as a whole is going swimmingly. While it took nearly half a day for our model to render, we spent that time setting up the Omeka page, and the resulting model looked pretty good. There were barely any stray particles, and slicing off the unwanted stand sections was manageable. The only issue we had was working around some glare created by the bright exhibit lights, which left the model slightly overexposed around the shoulder and smoothed out the textures there. Overall, I loved this project, and I can’t wait to see the rest of the class’s models!
2 thoughts on “Reflective Blog Post”
I love your idea that by using photogrammetry we could animate the objects; it would be really cool to see, even if it didn’t have much to do with the history. I think we may have had some slight problems with glare as well, because in some places our texture is a bit fuzzy. If we were all to retake our pictures, we might want to reposition some lights to make sure that no point on the object has glare.
I also love the idea of using photogrammetry to bring even more depth to museum experiences. It makes me wonder how virtual reality could also be used to heighten that immersion.