Location Based Services
Fundamental Technology and Business Drivers
By Fred Limp
Fred Limp is director, Center for Advanced Spatial Technologies, University of Arkansas; e-mail: fred@cast.uark.edu.
This article is the first in a three-part series of visualization features looking at this phenomenon in considerable detail, asking such questions as “why now?” “what does it mean for our traditional modes of business?” and “what does the future look like?”
James Cameron will be doing all of his new movies in 3-D, Google Earth is front-page news, Microsoft has responded with local.live.com, and on it goes. There’s a lot of “buzz” around 3-D right now. Some of it is “sound and fury signifying nothing,” but some of it reflects a growing fundamental change that will affect the structure of the geospatial community.
Although geographic exploration systems are generating the most talk, there are parallel and, perhaps more important, fundamental developments across the entire geospatial realm. More importantly, there are collateral changes in what have been separate areas, such as architectural design, community and military planning, and simulation and online games (especially massively multiplayer online games (MMOGs)).
And although there’s a lot of “eye candy” flashing about, there also are real changes lurking in the background. For many current GeoWorld readers, the future will be quite different.
One of the major reasons that there was so much money sloshing around during the dot-com era was famously due to “irrational exuberance,” but real value was created and taken by companies that were reducing inefficiencies in many traditional business practices. A similar opportunity is emerging as 3-D approaches are applied to a wide range of areas. This first article looks at the fundamental technology and business drivers behind such changes.
High-resolution ortho-imagery (left), even when viewed in a geographic exploration system, requires considerable expertise to understand. Oblique photography (right), although less “active,” is easier for non-specialists to understand.
What 3-D?
There are many types of 3-D with many objectives, ranging from simple visualizations to complex queries and analyses. In fact, most common GIS “3-D” data aren’t truly 3-D; they’re actually 2.5-D.
In the planimetric (traditional map) world, each x,y location can have a z. A road can go up and down hills, for example. A set of (usually interpolated) x,y locations can be used to create a grid of z values representing elevation. Images or other data can be draped over this grid, and these properties describe perhaps 90 percent of all the “3-D” information in the geospatial community.
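The one-z-per-location constraint can be sketched with a tiny grid. This is purely illustrative; the names and values are hypothetical, not any product’s format:

```python
# A 2.5-D elevation model: a regular grid where each (row, col)
# cell stores exactly one z value (meters). A road crossing the
# grid can rise and fall, but nothing can sit above or below it.
elevation = [
    [120.0, 125.0, 131.0],
    [118.0, 122.0, 128.0],
    [115.0, 119.0, 124.0],
]

def z_at(row, col):
    """Draping an image over the surface means each pixel simply
    inherits the single z value beneath it."""
    return elevation[row][col]

print(z_at(1, 2))  # exactly one elevation per x,y location
```

A bridge over a valley, with the road above and the stream below the same x,y cell, cannot be represented in such a grid; that limitation is what separates 2.5-D from true 3-D.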
A “real” 3-D system permits multiple z’s at any x,y location, but it has more than that. It’s important to realize that simply storing an x,y,z location is necessary, but it’s not sufficient for a true 3-D system. In a 2-D or 2.5-D GIS, users can ask “what’s the area of the intersection of farmer Brown’s field and the highway right of way?” or “how many houses are within 100 meters of a pipeline?” Also, they can look at (i.e., fly around) a building from all directions.
But in 2-D and 2.5-D, users can’t ask “what’s the volume of the polyhedron created by the intersection of a spill plume and an aquifer?” or “how many computers are below the water pipe in room 203?” and they can’t actually enter a building in a visualization.
The data types that make 3-D analytical and visual operations possible are different from, but related to, those of the 2-D world. Like planimetric geospatial data “fields” (i.e., grids/rasters), TINs and “vectors,” true 3-D data come in three basic forms: voxels, tetrahedrons and “the other kind.”
Voxels are the 3-D version of pixels or raster cells, and tetrahedrons are the 3-D equivalent of TINs. The “other kind” is the 3-D equivalent of the 2-D vector, and it’s usually described as a boundary representation system (B-rep).
Similar to how 2-D has points, edges and faces (i.e., points, lines and polygons), the 3-D B-rep world has points, edges, faces and “solids” or “volumes” (different systems use different terms). A wall of a room would be a face, and the envelope that defines the entire room is a solid (or volume).
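The point/edge/face/solid hierarchy can be sketched minimally. The class names and the unit-cube “room” below are illustrative assumptions, not drawn from any specific B-rep system:

```python
from dataclasses import dataclass, field

@dataclass
class Point:
    """A 3-D vertex."""
    x: float
    y: float
    z: float

@dataclass
class Face:
    """A bounded surface, e.g., one wall of a room."""
    vertices: list  # ordered Points bounding the face

@dataclass
class Solid:
    """The closed envelope of faces that defines a volume."""
    faces: list = field(default_factory=list)

# A unit-cube "room": 8 points, 6 faces, 1 solid.
p = [Point(x, y, z) for z in (0, 1) for y in (0, 1) for x in (0, 1)]
floor   = Face([p[0], p[1], p[3], p[2]])
ceiling = Face([p[4], p[5], p[7], p[6]])
# ...the four walls would be built the same way
room = Solid([floor, ceiling])
```

The key point is that the solid is a first-class object, so a query such as “what volume does this room enclose?” has something concrete to operate on, which a bag of disconnected 2-D polygons does not provide.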
There’s ongoing debate about the best way to organize such a 3-D B-rep data structure. Through the years, there have been several commercial geospatial systems that use voxels and perform analyses on them, and I’ll look at some specific products later. At this point, however, there are no major “vector” commercial systems that have implemented a true 3-D data structure and/or analysis system.
In addition to the basic data structures, there are key additional issues in how 3-D data are indexed and retrieved. A common way to index planimetric data is the quadtree, and the comparable indexing structure in 3-D is the octree. Interestingly, the latest release of the Oracle database (10gR2) added support for 3-D octree indexing. (Do they know something we don’t?)
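The octree idea follows directly from the quadtree: where a quadtree node splits a square into four quadrants, an octree node splits a cube into eight octants. A minimal sketch of the concept (an illustration only, not Oracle’s or anyone’s implementation):

```python
class OctreeNode:
    """Recursively subdivide a cube until few enough points remain."""
    MAX_POINTS = 4

    def __init__(self, center, half_size):
        self.center, self.half = center, half_size
        self.points, self.children = [], None

    def insert(self, pt):
        if self.children is not None:
            self.children[self._octant(pt)].insert(pt)
            return
        self.points.append(pt)
        if len(self.points) > self.MAX_POINTS:
            self._split()

    def _octant(self, pt):
        # One bit per axis gives 8 possible child cells.
        cx, cy, cz = self.center
        return (pt[0] >= cx) | ((pt[1] >= cy) << 1) | ((pt[2] >= cz) << 2)

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + (h if i & 1 else -h),
                        cy + (h if i & 2 else -h),
                        cz + (h if i & 4 else -h)), h)
            for i in range(8)
        ]
        pts, self.points = self.points, []
        for p in pts:                       # push points down a level
            self.children[self._octant(p)].insert(p)

# Usage: index a handful of x,y,z points.
tree = OctreeNode(center=(0.0, 0.0, 0.0), half_size=50.0)
for pt in [(1, 1, 1), (2, 2, 2), (-1, -1, -1), (3, 3, 3), (-2, 4, -5)]:
    tree.insert(pt)
```

The payoff is the same as with quadtrees: a range query only descends into the octants that intersect the query box, so most of the data is never touched.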
Storing all LIDAR returns in a database allows a range of analysis options based on return properties.
Why 3-D?
Why is adding 3-D so “cool?” This has a lot to do with scale, which has a lot to do with abstraction, and abstraction has a lot to do with experience and understanding.
So what does this mean? Almost all geospatial efforts involve reducing real-world phenomena to abstractions. The state of Arkansas, for example, becomes a block of color surrounded by a wavy black line on a map. Arkansas goes from a real thing to an abstraction. People then have to be trained to understand that the color and the line on the paper represent a whole state; it’s an abstraction that’s not natural.
So at a map scale of 1:1 million, Arkansas is a blob of color a few inches across. Change the scale down one magnitude to 1:100,000, and add a piece of information (such as the land use derived from Landsat analysis), and the single blob becomes a quilt of multiple colors a few feet across. If the colors have been picked well, then perhaps people will realize that red is high-density commercial land, green is forest, and shades of brown are different crop types. But they’ll probably need a legend, because it’s still an abstraction.
If an image map is placed over an elevation surface, and people are allowed to “fly” around, then they’ll quickly realize that the mountains are covered in trees, and the soybeans (a pale brownish-green color) are in the flat parts.
Now move down again one magnitude (1:10,000), and put DigitalGlobe QuickBird imagery or local high-resolution photography on the elevation data. Then people can see houses and roads; they’re still abstractions, but many of the larger real-world objects look the way they really look. There’s less abstraction, and less training is needed to understand the data. As the level of abstraction decreases, the number of people capable of (and interested in) looking grows.
Business and Philosophy
For a business, less abstraction means a larger market. Although things look more realistic at these scales of presentation, there’s something else going on. We’re still looking down on the world, and most people don’t experience the world from above.
There are some interesting philosophical issues here. For example, Antoine de Saint-Exupéry said, “A person taking off from the ground elevates himself above the trivialities of life into a new understanding.” Perhaps, but it’s also possible that small-scale 3-D views and abstractions can isolate us from the reality that maps and images represent, providing an unsupported sense of omnipotence.
Some argue that a lack of real, human-level detail in many community-planning maps is a root cause of many bad planning decisions, because decision makers are working with abstractions that don’t represent the situation’s reality. Wouldn’t it be ironic if maps were the source of many problems, rather than the basis for their solution?
Where representations begin to approach the actual world is down at least two magnitudes from the 1:10,000 scale-about 1:100. If data are properly represented at this scale, building facades, streetscapes and vegetation can appear realistic, and there’s little need for abstraction. Things are as they appear. With a full 3-D data structure, a building can be represented as viewed by a pedestrian at eye level.
The effect of scale, or level of detail, is shown by the buzz associated with Google Earth. The fundamental technology represented by Google Earth has been around for several years with the earlier Keyhole software and, even earlier, the Skyline TerraExplorer software. In addition to the impact from the Google name, I argue that what really kicked off Google Earth’s buzz was the high-resolution DigitalGlobe data. Less abstraction equals more buzz.
Microsoft’s addition of Pictometry oblique photography in the local.live.com system goes to this specific point. Oblique photography, though not as “accurate” as an orthophotograph, is much more understandable.
It seems that increasing 3-D resolution is a “good thing,” but there can always be too much of a good thing, even in 3-D data. Consider, for example, the 3-D data created by laser scanners, either aircraft (e.g., light detection and ranging (LIDAR)) or terrestrial (e.g., high-density survey (HDS)). With these technologies, it’s easy to create datasets of millions, if not billions, of x,y,z data points. Viewers (and computers) often are overwhelmed with data, and a process of abstraction is needed to reduce the detail to make it meaningful.
The Push
Although there may be several good conceptual and experiential reasons that 3-D approaches are currently a “big deal,” there are two other conditions necessary for the current (and greater future) explosion. One relates to development in technologies that make 3-D possible (the technology push), and the other deals with the business reasons for which they’re used (the pull).
Many of the technology drivers were initially developed in isolation, but people are increasingly seeing the value in integration. For example, satellite imagery flows into geographic exploration systems, as do computer-aided design (CAD) models. There are growing “semi-permeable membranes” between previous 3-D data and technology silos.
The result is a more immersive, richer experience for consumers. The dot-com boom (and bust) demonstrated that “content is king,” and the integration of multiple technology products into the new 3-D world is no different; if anything, it’s even more true there. So where does all the 3-D content come from?
I’ll start out with the push, and this article looks at the broad themes. The next article will look at some examples of specific products and how they fit into this structure as well as the business processes that are pulling the 3-D activities.
The main threads that serve as key 3-D technology drivers are the well-known and fundamental technological improvements in computer systems, including increased performance of memory, graphics, disks and display. 3-D data, display and analysis place massive demands on all aspects of a computer system as well as on the network to deliver data at high speeds. Basic computational capabilities are needed to support all aspects of the 3-D world-and even 2.5-D displays such as those provided by geographic exploration systems.
All LIDAR returns (left) provide information on tree and building heights and are a true 3-D dataset. When converted to “bare earth” (right), they provide useful elevation data, but other 3-D information is lost.
Stereo Viewing
A specific example of this general development is found in stereo viewing. High-quality stereo viewing requires good graphics, high refresh rates and some method (often specialized glasses) to ensure that each eye gets its own image. Fortunately, the performance of such systems is increasing, while prices are decreasing.
For example, about a decade ago, the Center for Advanced Spatial Technologies (CAST) obtained a 3-D stereo photogrammetric system, and its commercial cost was about $300,000. Last year, CAST obtained hardware systems that were more than comparable for $6,000.
Unfortunately, the geospatial community can’t take credit for these improvements. It’s clear that the computer gaming market is a key pull in such improvements, and, without it, 3-D applications still would be limited.
Stereo viewing systems are essential for many aspects of 3-D data development (using softcopy photogrammetry), and this area is well developed with many vendors. General stereo viewing, however, is more limited, but this will change.
There are now several easy-to-use stereo-viewing solutions, and this will be a developing arena. It’s likely that viewing an area in stereo will be as common in the future as viewing it on a map is today. For this to be true, however, the growth of stereo viewing must be paralleled by a growth in stereo data.
Today, many end users are requesting high-quality planimetric data as well as the actual stereo imagery and background information necessary to measure and view information. For example, the state of Arkansas is currently working with EarthData and its subcontractors, who are using Leica ADS40 four-band sensors to acquire one-meter orthophotography over the entire state (and one-foot imagery in many areas).
An AutoCAD file imported into a SoftImage animation package and integrated with traditional maps is used to create an interactive 3-D visitors’ map for the University of Arkansas campus.
In addition to the various orthophoto products, however, the effort will provide statewide raw stereo imagery. The next article in this series will revisit this project and some of the data processing as examples of 3-D data’s various challenges and results.
The project also illustrates developments that are rapidly occurring in 3-D data acquisition technology. The addition of onboard, survey-grade Global Positioning System technology, inertial measurement units and powerful processing software dramatically increases the quality and reduces the time that it takes to acquire data.
In addition to photogrammetric methods to create data from vertical aerial photography, terrestrial photography is exploding as a method to acquire highly accurate and detailed building and other data. Software packages such as PhotoModeler, SOCET SET, LPS and others allow relatively inexpensive digital cameras to be used to create detailed, true 3-D data of buildings and other features.
Further Developments
A second key technology push is the rapid evolution in LIDAR and its growing use. LIDAR data can provide massive amounts of x,y,z data with high resolution and accuracy. Managing and using the data presents substantial challenges, and it’s illustrative to briefly consider how LIDAR is being used in the geospatial world to understand the coming changes.
For many, LIDAR represents a faster and more-accurate approach to acquire elevation data. Raw LIDAR data are processed to create a grid of “bare earth” elevation data. In a sense, the process moves raw LIDAR data (inherently a 3-D dataset with its multiple returns) to a 2.5-D representation. New approaches that maintain the multiple returns are increasing the usefulness of LIDAR data for many applications. This is another driver that will be looked at in more detail in a future article.
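The collapse from multiple returns to a bare-earth grid can be sketched in a few lines. This is a toy illustration (production LIDAR filtering is far more sophisticated, using classification, slope tests and more), with hypothetical point values:

```python
from collections import defaultdict

# Toy LIDAR returns as (x, y, z) tuples. Several returns can share
# one x,y location: a first return off a tree canopy or rooftop,
# and a last return off the ground beneath it.
returns = [
    (10, 10, 35.0),  # canopy
    (10, 10, 12.5),  # understory
    (10, 10,  2.1),  # ground
    (11, 10, 30.2),  # rooftop
    (11, 10,  2.3),  # ground beside the building
]

# "Bare earth": keep only the lowest z per cell, collapsing the
# true 3-D point cloud into a 2.5-D grid. Canopy and building
# heights are discarded in the process.
bare_earth = defaultdict(lambda: float("inf"))
for x, y, z in returns:
    bare_earth[(x, y)] = min(bare_earth[(x, y)], z)

print(dict(bare_earth))
```

Keeping all five points in a database instead of just the two minimums is exactly what preserves the tree and building heights mentioned above.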
An approach that’s closely related to LIDAR is HDS, which involves creating a high-density x,y,z point cloud that defines structures, rock faces and other surfaces. In some instances, the technology is actually LIDAR (used horizontally rather than vertically). In other systems, different approaches are used, but the result is the same type of dense x,y,z data. Integrating vertical LIDAR and HDS is an exciting new 3-D field.
Developments outside the traditional geospatial marketplace also are central and sometimes more important components in the 3-D explosion. Newly accessible tools for creating 3-D data in CAD are a dramatic growth area. CAD systems have supported a range of 3-D data and display for some time, but these tools have had a substantial learning curve and complex requirements.
In the last few years, user-friendly 3-D “drafting” packages have been developed, such as SketchUp, ArchiCAD and Revit. These packages also are fostering relationships with geospatial systems. An excellent example is the ability to export SketchUp virtual buildings to geospatial solutions such as ESRI’s ArcGlobe and Google Earth.
There’s also a growing market in visualization and animation software packages. The distinction between animation and visualization is increasingly arbitrary. But a common development is the ability to integrate and/or constrain the results based on real-world data, whether they’re simply elevation data over which a scene is draped or more complex relationships such as vectors in a GIS that are used to define locations of complex roadways in visualizations.
An example is Autodesk’s January 2006 purchase of Alias, a leader in the animation field with its Maya system. Another example is the real-world interaction of products such as Visual Nature Studio, which reads GIS datasets and creates realistic virtual landscapes and vegetation.
Another key technology is “motion capture” (sometimes called performance capture), which involves positioning reflective materials on key points on a moving human and tracking location with a series of linked cameras. The process is a specialized application of mathematical methods that underlie traditional photogrammetry. The resulting data are moved into software that animates a virtual character with a real person’s movements. A recent example of this was seen in the movie “The Polar Express.”
Sim City 4 from Electronic Arts includes more than 100GB of topographical information from the U.S. Geological Survey and a wealth of real-life buildings that can be imported into this environment. Sims Online is a massively multiplayer online game.
Specialized Products and Professions
Looking at the current market, there are different product lines and “professions” that deal with different aspects of 3-D content. There are specialized products that focus on creating realistic landscapes and vegetation, applying physical rules such as fractal structures of leaves or rocks. Other products and workflows emphasize the built environment.
Geospatial software creates and manages data about the land and its properties. HDS and terrestrial photogrammetry record the details of the existing world, especially its current structures. Animation software takes all the inputs and applies a narrative structure, and motion capture introduces active humans into the result.
Gaming and high-end simulation software engines have aspects of all of the aforementioned elements, but they add two new capabilities: the definition of visual processes based on underlying physics and the related aspect of object interaction. In an animation package, animators would have to specifically define that a ball “dropped” from a table would “fall” down by setting specific key frames where the ball was “on” the table and then “on” the floor. In modern gaming software, “gravity” is a program element, and it can be applied to any object. Drop a ball, and it falls; shoot a monster, and it explodes. Add the ability to distribute this via the Internet, and MMOGs are created.
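The key-frame vs. physics contrast can be sketched simply: instead of an animator hand-placing the ball at each frame, the engine applies one gravity rule to every object at every timestep. The names and numbers below are illustrative only:

```python
GRAVITY = -9.8  # one rule, applied uniformly; no per-object key frames

class Ball:
    def __init__(self, z):
        self.z = z      # height above the floor
        self.vz = 0.0   # vertical velocity

def step(obj, dt=0.1):
    """One physics tick: gravity acts on any object with z and vz."""
    obj.vz += GRAVITY * dt
    obj.z = max(0.0, obj.z + obj.vz * dt)  # crude collision: floor at z = 0

ball = Ball(z=1.0)       # "drop" the ball from the table
for _ in range(20):      # run the simulation forward
    step(ball)
print(ball.z)            # the ball has fallen to the floor
```

The animator’s version of the same scene would instead store two key frames (“on the table,” “on the floor”) and interpolate between them; the physics version generalizes to any object the rule touches.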
The technologies behind MMOGs, when integrated with the other aspects, will change the “(Geo)world” completely, and it will happen soon-if not right now. What will the new (Geo)world look like? Where will we get and store all the data? How will we integrate different workflows and (sometimes) conflicting professional attitudes and perspectives? And, perhaps most importantly, why is this inevitable?
I’ll explore such questions in the next segment of “An Impending Massive 3-D Mashup.”
Author’s Note: Many of the issues raised in this article can be found in Sisi Zlatanova and David Prosperi’s book, Large Scale 3-D Data Integration, published by Taylor and Francis in 2006.