
Jan 19 2022

An Inside Look at the New Release of the IVAAP Data Backend SDK

INT recently announced the release of IVAAP 2.9. We sat down with Thierry Danard, INT’s VP of Core Platform Technologies, for a quick chat about what this release includes for the Data Backend SDK.

 

Hi Thierry, what can you tell us about IVAAP 2.9 for the Data Backend?

There are a few low-level changes that won’t make it into the release notes, but that will make a difference for users. For example, we optimized the data caching mechanism. Instead of being based on the number of datasets, the data caches are now based on the size of these datasets. This typically means lower latency, as more datasets tend to be cached within the same memory space. This is a feature we initially used only for WITSML wells and have since extended to other connectors and data types.
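As a rough illustration of the idea (not IVAAP’s actual implementation; class and method names are invented), a cache bounded by total byte size rather than by entry count could look like this:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // A size-bounded LRU cache: entries are evicted based on the total number of
    // bytes they hold rather than on how many entries there are. Illustrative only.
    public final class SizeBoundedCache<K> {

        private final long maxBytes;
        private long currentBytes;
        // Access order = true gives least-recently-used iteration order.
        private final LinkedHashMap<K, byte[]> entries = new LinkedHashMap<>(16, 0.75f, true);

        public SizeBoundedCache(long maxBytes) {
            this.maxBytes = maxBytes;
        }

        public synchronized void put(K key, byte[] dataset) {
            byte[] previous = entries.put(key, dataset);
            if (previous != null) {
                currentBytes -= previous.length;
            }
            currentBytes += dataset.length;
            evictLeastRecentlyUsed();
        }

        public synchronized byte[] get(K key) {
            return entries.get(key);
        }

        // Drops the least recently used datasets until the byte budget is respected.
        private void evictLeastRecentlyUsed() {
            Iterator<Map.Entry<K, byte[]>> it = entries.entrySet().iterator();
            while (currentBytes > maxBytes && it.hasNext()) {
                Map.Entry<K, byte[]> eldest = it.next();
                currentBytes -= eldest.getValue().length;
                it.remove();
            }
        }
    }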

At a high level, we added a powerful “tiled images” feature. Some INT customers have large rasters that they need to visualize, or that their customers need to visualize as part of a portal. These images are stored as good old TIFF image files on a local file system or in the cloud. To visualize these images in IVAAP, you just need to point IVAAP at the location of these images, and IVAAP does the rest. While the idea of visualizing images online might seem mundane, there is quite a lot of technology involved in making this happen.

First, these image files are LARGE. They can be up to 4 GB in TIFF format, and that’s a compressed format! To make the visualization of these images seamless, you need to be really good at:

  1. Downloading these files from the cloud, fast
  2. Reading these images in their native format
  3. Rendering these images as small tiles and sending these tiles to the viewer efficiently
  4. Stitching these images back together as one raster on the viewer side
  5. Caching these images long enough, but not too long
  6. Doing all this concurrently, for a large number of simultaneous users

That’s the technology side of it. From a business perspective, what’s really interesting is that there is no preprocessing of the images required. The ingestion workflow requires no specialized tooling. There’s no need to “precut” each TIFF as multiple tiles at multiple resolution levels. The “cutting” is all done on the fly and it’s seamless to end users. This is particularly important to keep storage to a minimum when you migrate your image library to the cloud. When each TIFF file takes gigabytes, a library of only a few thousand files already has a significant footprint, so you don’t want to expand that footprint with file duplication.
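To make the on-the-fly “cutting” more concrete, here is a minimal sketch (not IVAAP’s code; tile addressing and scaling are simplified) that decodes only the region of a TIFF covered by one requested tile, using the standard javax.imageio API and assuming an ImageIO TIFF reader is available (one ships with Java 9 and later):

    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import java.util.Iterator;
    import javax.imageio.ImageIO;
    import javax.imageio.ImageReadParam;
    import javax.imageio.ImageReader;
    import javax.imageio.stream.ImageInputStream;

    // Cuts one tile out of a large image on the fly: only the requested region is
    // decoded, so the full raster is never loaded and never pre-cut on disk.
    public final class TileCutter {

        public static BufferedImage readTile(File tiff, int tileX, int tileY, int tileSize)
                throws IOException {
            try (ImageInputStream input = ImageIO.createImageInputStream(tiff)) {
                Iterator<ImageReader> readers = ImageIO.getImageReaders(input);
                if (!readers.hasNext()) {
                    throw new IOException("No ImageIO reader found for " + tiff);
                }
                ImageReader reader = readers.next();
                try {
                    reader.setInput(input, true, true);
                    ImageReadParam param = reader.getDefaultReadParam();
                    // Decode only the pixels covered by the requested tile.
                    param.setSourceRegion(new Rectangle(
                            tileX * tileSize, tileY * tileSize, tileSize, tileSize));
                    return reader.read(0, param);
                } finally {
                    reader.dispose();
                }
            }
        }
    }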

 

Is this image tiling feature something I could use for my own image storage?

Yes. The technology we developed abstracts the storage mechanism away. The reading, cutting, rendering, and caching are part of the IVAAP Data Backend SDK’s API. You can reuse these parts in your own code.

 

What’s the purpose of this SDK, and how does it apply to images?

One of the strengths of IVAAP is that it allows the visualization of data from multiple data sources. For example, just for visualization of wells, IVAAP supports Peloton WellView, PPDM, WITSML, LAS files in the cloud, etc. These data sources typically share a common data model, but what’s different between them is the medium or protocol to access this data. The typical use case of the IVAAP Data Backend SDK is to facilitate the implementation of that access layer.

Images are treated as data. Just like we have “well” web services, we have “image” web services. The image web services are generic in nature—it’s the same service code running regardless of the underlying storage mechanism. It’s only the access code that differs. As a matter of fact, we implemented access to four types of image stores on top of our SDK:

  • Images stored locally on disk
  • Images stored in Amazon S3
  • Images stored in Microsoft Azure Blob Storage
  • Images stored in Google Cloud Storage

In this particular example, we used our own SDK to develop both the data layer and the web service layer.
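The separation between generic service code and storage-specific access code could be pictured like this (a sketch with invented names, not the SDK’s actual interfaces):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // The generic tiling and web service code works against one interface; each
    // backend (local disk, Amazon S3, Azure Blob Storage, Google Cloud Storage)
    // supplies its own implementation.
    interface ImageStore {

        /** Opens the raw bytes of an image, wherever it happens to be stored. */
        InputStream open(String imageId) throws IOException;

        /** Size of the stored image in bytes, useful for size-based caching. */
        long sizeInBytes(String imageId) throws IOException;
    }

    // One of the four possible implementations: images stored locally on disk.
    final class LocalDiskImageStore implements ImageStore {

        private final Path root;

        LocalDiskImageStore(Path root) {
            this.root = root;
        }

        @Override
        public InputStream open(String imageId) throws IOException {
            return Files.newInputStream(root.resolve(imageId));
        }

        @Override
        public long sizeInBytes(String imageId) throws IOException {
            return Files.size(root.resolve(imageId));
        }
    }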

 

Is the IVAAP Data Backend SDK a web framework?

While this SDK does provide an API to create your own web services, it goes well beyond that. If you call the IVAAP Data Backend SDK a framework, then it’s a highly specialized framework. And it’s not just a web framework, it’s also a data framework. Also, web frameworks tend to force you into a container. To scale your application, you are limited by the capabilities of this container. The IVAAP Data Backend SDK abstracts the container away, giving customers multiple options for how to deploy it and scale it.

From a technical perspective, a good SDK should empower developers. Often, this means requiring the least amount of effort to get the job done. From a business perspective, a good SDK should reduce development costs. Calling the IVAAP Data Backend SDK “yet another web framework” is missing the point of its value. Its value is not about doing the same thing as your favorite “other framework,” it’s about fitting your use case in the most effective way.

 

Then … how is the IVAAP Data Backend SDK a good fit?

Obviously, the integration with the rest of IVAAP is a strong benefit. You could write your data access object (DAO) layer using the API of your choice, but you would still need to write more mundane aspects, such as:

  • Exposing this data in a REST API, following the exact protocol that the IVAAP client expects. There would be hundreds of REST services to implement.
  • Integrating with services from the Admin server: for example, how users are identified, how to find out who has access to what, what datasets belong to a project, etc.

The second fit is that the SDK is designed to work with geoscience data. It integrates the data models of multiple data types, such as wells, seismic, surfaces … and raster images. While the IVAAP Data Backend SDK can be used outside of a geoscience context, a good portion of its API is geoscience-specific.

During design, we were particularly careful not to make any assumptions about how our customers’ geoscience data is stored. No matter how exotic your data systems might be, the IVAAP Data Backend will be able to access them. We are not just talking about storage, we are also talking about real-time feeds and machine learning (ML) workflows. We verified this multiple times. One particularly visible example is OSDU. INT has been on the leading edge of OSDU development, following the multiple iterations or flavors of OSDU services without requiring a change to our SDK.

 

How is the IVAAP Data Backend SDK effective?

Its API is quite simple to learn. Multiple INT customers have used this SDK to develop their own IVAAP customizations, and its API has always been well received. One aspect that is particularly well liked is the lookup architecture. Plugging in a class requires no XML, just one Java annotation, and it’s the same annotation used across the entire API. It’s an elegant mechanism, easy to learn, and even easier to use.
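The SDK’s own annotation is not reproduced here; the sketch below only illustrates the general annotation-driven registration pattern, with an invented @Lookup annotation and an invented extension point:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // An invented @Lookup annotation standing in for the SDK's own annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Lookup {
        Class<?> value(); // the extension point this class should be registered under
    }

    // An invented extension point, standing in for one of the SDK's interfaces.
    interface DataConnector {
    }

    // Plugging in an implementation is a one-line affair; no XML is involved.
    @Lookup(DataConnector.class)
    class MyDataConnector implements DataConnector {
        // connector-specific access code goes here
    }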

The real proof of the IVAAP Data Backend SDK’s effectiveness is the time it takes external developers to learn it and develop their own connector. We tested the effectiveness of the SDK by asking new hires to develop a connector, armed only with their knowledge of Java. Without any prior geoscience knowledge, the average time to get that connector up and running has been two weeks.

Another way the backend keeps developers effective is by not taking away their favorite development tools. While the IVAAP Data Backend uses a powerful cluster architecture in production deployments, the typical developer’s day is spent in their favorite IDE (NetBeans, Eclipse, IntelliJ, etc.) against a well-known application server (Tomcat, GlassFish). Development is much faster when you don’t need to launch an entire cluster for a simple debugging session.

Effectiveness and fit are key. The goal of a typical framework is to help you get started faster. It provides a shortcut to skip the implementation of the mundane concerns of an application. In IVAAP’s case, for many customers, the application itself is already written. For these customers, the IVAAP Data Backend SDK helps you get finished faster instead. It provides customers with the API to finish the last mile of an IVAAP deployment, the access layer to their proprietary data store.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/.

Filed Under: IVAAP Tagged With: backend, data, IDE, image, ivaap, OSDU, raster, SDK, TIFF

Jun 17 2021

How to Extend the IVAAP Data Model with the Backend SDK

The IVAAP Data Backend SDK’s main purpose is to facilitate the integration of INT customers’ data into IVAAP. A typical conversation between INT’s technical team and a prospective customer might go like this:

Can I visualize my data in IVAAP?

Yes, the backend SDK is designed to make this possible, without having to change the UI or even write web services.

What if my data is stored in a proprietary data store?

The SDK doesn’t make any assumptions on how your data is stored. Your data can be hidden away behind REST services, in a SQL database, in files, etc.

What are the steps to integrate my data?

The first step is to map your data model with the IVAAP data model. The second step is to identify, for each entity of that data model, how records are identified uniquely. The third step is to plug your data source. The fourth and final step is to implement, for each entity, the matching finders.
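As an illustration of that last step (with invented interface and type names, not the SDK’s actual API), a finder for the “well” entity might look like this:

    import java.util.Collections;
    import java.util.List;

    // Step two gave each well a unique identifier; step four implements a finder
    // that resolves those identifiers against the proprietary store.
    interface WellFinder {

        /** Lists the wells visible within a given project. */
        List<String> findWellIds(String projectId);

        /** Resolves one well from its unique identifier. */
        Well findWell(String wellId);
    }

    final class Well {
        final String id;
        final String name;

        Well(String id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    // The store behind the finder could be a SQL database, REST services, files, etc.
    final class MyStoreWellFinder implements WellFinder {

        @Override
        public List<String> findWellIds(String projectId) {
            // query the proprietary store here
            return Collections.emptyList();
        }

        @Override
        public Well findWell(String wellId) {
            // fetch one record by its unique key here
            return new Well(wellId, "Well " + wellId);
        }
    }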

What if I want to extend the IVAAP data model?

My typical answer to this last question is “it depends”. The SDK has various hooks to make this possible. Picking the right “hook” depends on the complexity of what’s being added.

Using Properties

The IVAAP data model was created after researching the commonalities between the industry data models. However, when we built it, we kept in mind that each data store carries its own set of information, information that is useful for users to consume. This is why we made properties a part of the data model itself. From an IVAAP SDK point of view, properties are a set of name-value pairs that you can associate with specific entities within the IVAAP data model.

For example, if a well dataset is backed by an .LAS file in Amazon S3 or Microsoft Azure Blob Storage, knowing the location of that file is a valuable piece of information as part of a QC workflow. But not all data stores are backed by files; a file location is not necessarily relevant to a user accessing data from PPDM. As a result, the set of properties shown when opening a dataset backed by Azure will typically differ from the set coming from PPDM, even for the same well.

 

An example of the properties dialog, showing multiple properties of a seismic dataset stored in Amazon S3.

Calling the properties a set of name-value pairs does not do justice to their flexibility. While a simple name+value is the most common use, you can create a tree of properties and attach additional attributes to each name-value pair. The most common additional attribute is a unit, qualifying the value to make the property more of a measurement. Another attribute allows name-value pairs to be invisible to users. The purpose of invisible properties is to facilitate integration with systems other than the IVAAP client. For example, while a typical user might be interested in the size of a file, that size should be rounded and expressed in KB, MB, or GB. External software consuming the IVAAP Properties REST services would need the exact number of bytes.
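A minimal sketch of such a structure (with invented class names and values; the SDK’s own property API is not shown here) might look like this:

    import java.util.ArrayList;
    import java.util.List;

    // Each property is a name-value pair that can carry extra attributes (a unit,
    // a visibility flag) and can nest children to form a tree.
    class Property {

        final String name;
        final String value;
        String unit;             // optional: turns the value into a measurement
        boolean visible = true;  // invisible properties target external software, not users
        final List<Property> children = new ArrayList<>();

        Property(String name, String value) {
            this.name = name;
            this.value = value;
        }
    }

    class PropertyExample {

        // Builds the file-related properties of a dataset backed by a .LAS file.
        static Property fileProperties() {
            Property file = new Property("File", "well_1234.las");

            Property size = new Property("Size", "12.4");
            size.unit = "MB";                  // rounded, user-facing measurement

            Property exactSize = new Property("Size in bytes", "13002345");
            exactSize.visible = false;         // consumed by external software only

            file.children.add(size);
            file.children.add(exactSize);
            return file;
        }
    }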

One of the benefits of using properties to carry information is that implementing your own property finder is simple and requires no additional work: there are no custom REST services to write and no widget to implement to visualize these properties. The IVAAP HTML5 client is designed to consume the IVAAP services of the data backend and show these properties in the UI.

Adding Your Own Tables and Documents

One of the limitations of properties is that they don’t provide much interaction. Users can only view properties. The simplest way to extend the IVAAP model so that users can interact with the data is to add tables. For example, the monthly production of a well is an easy table to make accessible as a node under a well. Once the production of a well is accessible as a table, users have multiple options to graph this production: as a 2D plot, as a pie chart, as a histogram, etc. And this chart can be saved as part of a dashboard or a template, making IVAAP a portal.

The IVAAP Data Backend SDK has been designed to make the addition of tables a simple task. Just like for properties, the HTML5 Viewer of IVAAP doesn’t need to be customized to discover and display these tables. It’s the services of the data backend that direct the viewer on how to build the data tree shown to users. And while the data backend might advertise many reports, only non-empty reports will be shown as nodes by the viewer. 
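As a sketch of what exposing such a report could look like (invented names, not the SDK’s actual table API):

    import java.util.List;

    // One row of a monthly production report.
    final class ProductionRow {
        final String month;
        final double oilBarrels;
        final double gasMcf;

        ProductionRow(String month, double oilBarrels, double gasMcf) {
            this.month = month;
            this.oilBarrels = oilBarrels;
            this.gasMcf = gasMcf;
        }
    }

    interface WellReportFinder {

        /** Rows of the monthly production report for a well; an empty list means the viewer shows no node. */
        List<ProductionRow> findMonthlyProduction(String wellId);
    }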

 

An example of tabular reports related to a well.

In the many customization projects that I’ve been involved in, the tabular features of IVAAP have been used the most. I have seen dozens of reports under wells. The IVAAP Data Backend makes no assumptions about where this production data is stored relative to where the well is stored. For example, you can mix the schematics from Peloton WellView with the production reports from Peloton ProdView. From a user’s point of view, the source of the data is invisible; IVAAP combines the data from several sources in a transparent way. Extending the IVAAP data model doesn’t just mean exposing more data from your data source; it also means enriching your data model with data from other sources.

Data enrichment is sometimes achieved simply by making the documents associated with a well accessible. For example, for Staatsolie’s portal, the IVAAP UI gave direct access to the documentation of a well, stored in Schlumberger’s ESearch.

 

An example of a PDF document related to a well.

Adding Your Own Entities and Services

When data cannot be expressed as properties, tables, or documents, the next step is to plug in your own model. The API of the Backend SDK makes it possible to plug your own entities under any existing entity of the built-in data model. In this use case, not only does the data access code need to be developed, but also the code that exposes this data to the viewer. The IVAAP data model is mature, so this is a rare occurrence.

There are hundreds of services implemented with the IVAAP Data Backend SDK. Developers who embark on adding their own data types can be reassured that the path they follow is the same path INT developers follow as we augment the IVAAP data model. INT makes use of its own SDK every day.

 

Home page of the website dedicated to the IVAAP Data Backend SDK.

Whether IVAAP customers need to pepper the IVAAP UI with proprietary properties or plug in their own data types, they have options. The SDK is designed to make extensions straightforward, not just for INT’s own developers, but for INT customers as well. You do not need to contract INT’s services to roll your own extensions. You can, but you don’t have to. When IVAAP gets deployed, we don’t just give you the keys to IVAAP as an application; we also give you the keys to IVAAP as a platform, where you can independently control its capabilities.

For more information on IVAAP, please visit www.int.com/products/ivaap/

 

Filed Under: IVAAP Tagged With: backend, data, html5, ivaap, SDK

May 20 2021

Deploying IVAAP Services to Google App Engine

One of the productivity features of the IVAAP Data Backend SDK is that the services developed with this SDK are container-agnostic. Practically, it means that a REST service developed on your PC using your favorite IDE and deployed locally to Apache Tomcat will run without changes on IVAAP’s Play cluster.

While the Data Backend SDK is traditionally used to serve data, it is also a good candidate when it comes to developing non-data-related services. For example, as part of IVAAP 2.8, we worked on a gridding service. In a nutshell, this service computes a grid surface based upon the positions of a top across the wells of a project. When we tested this service, we didn’t deploy it to IVAAP’s cluster; it was deployed as a standalone application, as a servlet, on a virtual machine (VM).

Deploying Apache Tomcat on a virtual machine is “old school”. Our customers are rapidly moving to the cloud, and while VMs are often a practical choice, other options are sometimes available. One of these options is Google App Engine. Google App Engine is a bit of a pioneer of cloud-based deployments. It was the first product that allowed servlet deployments that scale automatically, without having to worry about the underlying infrastructure of virtual machines. This “infinite” scalability comes with quite a few constraints, and I was curious to find out whether services developed with the IVAAP Data Backend SDK could live within these constraints (spoiler alert: they can).

Synchronous Servlet Support

The first constraint was the lack of support for asynchronous servlets: Google App Engine doesn’t support them, and the IVAAP servlet shipped with the SDK is strictly asynchronous. Supporting the synchronous requirements of Google App Engine didn’t take much time. The main change was to modify the concrete implementation of com.interactive.ivaap.server.servlets.async.AbstractServiceRequest.waitForResponse to wait on a java.util.concurrent.CountDownLatch instead of calling javax.servlet.ServletRequest.startAsync().
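A minimal sketch of that kind of synchronous wait (the class below is illustrative, not the SDK’s actual AbstractServiceRequest):

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    // Instead of putting the servlet request into asynchronous mode, the request
    // thread blocks on a CountDownLatch until the response has been produced.
    class SynchronousServiceRequest {

        private final CountDownLatch completed = new CountDownLatch(1);

        /** Called by the worker thread once the response has been written. */
        void markCompleted() {
            completed.countDown();
        }

        /** Called by the servlet thread; blocks instead of using startAsync(). */
        boolean waitForResponse(long timeout, TimeUnit unit) throws InterruptedException {
            return completed.await(timeout, unit);
        }
    }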

Local File Access

The second constraint was the lack of a local file system. Google App Engine doesn’t let developers access the local files of the virtual machine where an application is deployed. The IVAAP Data Backend SDK typically doesn’t make much use of the local file system, except at startup when it reads its service configuration. To authorize users, the services developed with the IVAAP Data Backend SDK need to know how to validate Bearer tokens, and this validation requires the knowledge of the host name of the IVAAP Admin Backend. The Admin Backend exposes REST services for the validation of Bearer tokens. To support Google App Engine, I had to make the discovery of these configuration files pluggable so that they can be read from the WEB-INF directory of the servlet instead of a directory external to that servlet.
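A sketch of what loading configuration from inside the servlet could look like (the file name is invented; the real SDK configuration layout is not shown here):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;
    import javax.servlet.ServletContext;

    // On Google App Engine there is no external directory to read from, so the
    // configuration is loaded from the WEB-INF directory packaged with the servlet.
    class WebInfConfigurationLoader {

        Properties load(ServletContext context) throws IOException {
            Properties config = new Properties();
            try (InputStream in = context.getResourceAsStream("/WEB-INF/ivaap-backend.properties")) {
                if (in != null) {
                    config.load(in);
                }
            }
            return config;
        }
    }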

Persistence Mechanism

The third constraint was the lack of persistence. Google App Engine doesn’t provide a way to “remember” information between two HTTP calls. To effectively support computing services, a REST API cannot make an HTTP client “wait” for the computation to complete. The computation might take minutes, even hours. The REST API of a computing service has to give a “ticket” number back to the client when a process starts, and provide a way for this client to observe the progress of that ticket, all the way to completion. In a typical servlet deployment, there are many options to achieve this: the service can use the Java heap to store the ticket information, or use a database. To achieve the same result with Google App Engine, I needed to pick a persistence mechanism. For simplicity’s sake, I picked Google Cloud Storage. The state of each ticket is stored as a file in that storage.
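For example, a minimal ticket store along those lines could be written with the Google Cloud Storage client library like this (bucket and object naming are illustrative):

    import com.google.cloud.storage.Blob;
    import com.google.cloud.storage.BlobId;
    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;
    import java.nio.charset.StandardCharsets;

    // Stores the state of each computation ticket as a small object in Google
    // Cloud Storage, so it survives between two stateless HTTP calls.
    class TicketStore {

        private final Storage storage = StorageOptions.getDefaultInstance().getService();
        private final String bucket;

        TicketStore(String bucket) {
            this.bucket = bucket;
        }

        void saveState(String ticketId, String stateJson) {
            BlobInfo blobInfo = BlobInfo.newBuilder(BlobId.of(bucket, "tickets/" + ticketId)).build();
            storage.create(blobInfo, stateJson.getBytes(StandardCharsets.UTF_8));
        }

        String readState(String ticketId) {
            Blob blob = storage.get(BlobId.of(bucket, "tickets/" + ticketId));
            return blob == null ? null : new String(blob.getContent(), StandardCharsets.UTF_8);
        }
    }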

Background Task Executions

The fourth constraint was the lack of support for background executions. Google App Engine by itself doesn’t allow processes to execute in the background. Google, however, provides integration with another product called Google Cloud Tasks. Using the Google Cloud Tasks API, you can submit HTTP requests to a queue, and Google Cloud Tasks will make sure these requests get executed eventually. Essentially, when the gridding service receives an HTTP request, it creates a ticket number and immediately submits this HTTP request to Google Cloud Tasks, which in turn calls Google App Engine back. The IVAAP service recognizes that the call comes from Google Cloud Tasks and stores the result to a file in Google Cloud Storage instead of the servlet output stream. It then notifies the client that the process has completed.
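A sketch of that hand-off with the Google Cloud Tasks client library (the project, location, queue, and callback URI are illustrative values, not IVAAP’s):

    import com.google.cloud.tasks.v2.AppEngineHttpRequest;
    import com.google.cloud.tasks.v2.CloudTasksClient;
    import com.google.cloud.tasks.v2.HttpMethod;
    import com.google.cloud.tasks.v2.QueueName;
    import com.google.cloud.tasks.v2.Task;
    import java.io.IOException;

    // The service answers immediately with a ticket number and enqueues the real
    // work with Google Cloud Tasks, which later calls the application back.
    class GriddingTaskSubmitter {

        void submit(String ticketId) throws IOException {
            try (CloudTasksClient client = CloudTasksClient.create()) {
                String queuePath = QueueName.of("my-project", "us-central1", "gridding-queue").toString();
                AppEngineHttpRequest callback = AppEngineHttpRequest.newBuilder()
                        .setHttpMethod(HttpMethod.POST)
                        .setRelativeUri("/compute/grid?ticket=" + ticketId)
                        .build();
                Task task = Task.newBuilder().setAppEngineHttpRequest(callback).build();
                client.createTask(queuePath, task);
            }
        }
    }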

Here’s a diagram that describes the complete workflow: 

[Workflow diagram: INT_GCP_Workflow]

Constraints and Considerations

While the SDK did provide the API to implement this workflow out of the box, getting it to work took a bit of time. I had to learn three Google products at once. I also encountered obstacles that I will share here so that other developers can benefit:

  1. The first obstacle was that the Java SDK for Google App Engine requires the Eclipse IDE. There is no support for the NetBeans IDE. I am more proficient with NetBeans.
  2. The second obstacle was that I had to register my Eclipse IDE with Google so I could deploy code from that environment. As it happened, the Google registration server was having issues that day, blocking me from making progress.
  3. The third obstacle was the use of Java 8. The Google Cloud SDK required Java 8, but Eclipse defaulted to Java 11. It took me a while to understand the arcane error messages thrown at me.
  4. The fourth obstacle was that I had to pick a flavor of Google App Engine, either “Standard” or “Flexible”. The “Standard” option is cheaper to run because it doesn’t require an instance running at all times. The “Flexible” option has less warmup time because there is always at least one instance running. There are many more differences, not all of them well documented. The two options are similar, but do not share the same API. You don’t write the same code for both environments. In the end, I picked the “Standard” option because it was the most constraining, better suited to a proof of concept.
  5. The fifth obstacle was the confusion caused by the word “Promote” used by the Google SDK when deploying an instance. In this context, “Promote” doesn’t mean “advertising”; it means “production”. For a while, I couldn’t figure out why my application wouldn’t show any changes where I expected them. The answer was that I hadn’t “promoted” them.
  6. The last obstacle was the logging system. Google has a “Google Logging” product for accessing the logs produced by your application. Logging is essential to debugging unruly code that you can’t run locally. Despite several weeks of use, I still haven’t figured out how this product really works. It is designed to monitor an application in production, not so much for debugging. Debugging with logs is difficult. There might be several reasons why you can’t find a log. The first possibility is that the code doesn’t go where you think it’s going, and the log is never produced. The second possibility is that the log was produced but hasn’t shown up yet because there is a significant delay. The third possibility is that it has shown up, but is nested inside some obscure hierarchy, and you won’t see it unless you expand the entire tree of logs. The log search doesn’t help much and has some strange UI quirks. I found that the most practical way to explore logs is to download them locally, then use the search capabilities of a text editor. Because the running servlet is not local to your development environment, debugging a Google App Engine application is a time-consuming activity.

In the end, the IVAAP Data Backend SDK passed this proof of concept with flying colors. Despite the constraints and obstacles of the environment, all the REST services that were written with the IVAAP Cluster in mind are compatible with Google App Engine, without any changes. Programming is hard; it’s an investment in time and resources. Developing with the IVAAP Data Backend SDK preserves that investment because it makes a minimum of assumptions about how and where you will run your code.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/.

Filed Under: IVAAP Tagged With: API, cloud, Google, Google App Engine, ivaap, SDK

Nov 09 2020

Human Friendly Error Handling in the IVAAP Data Backend

As the use cases of IVAAP grow, the implementation of the data backend evolves. Past releases of IVAAP have been focused on providing data portals to our customers. Since then, a new use case has appeared where IVAAP is used to validate the injection of data in the cloud. Both use cases have a lot in common, but they differ in the way errors should be handled.

In a portal, when a dataset fails to load, the reason why needs to stay “hidden” from end users. The inner workings of the portal and its data storage mechanisms should not be exposed, as they are irrelevant to the user trying to open a new dataset. When IVAAP is used to validate the results of an injection workflow, many more details about where the data is and how it failed to load need to be communicated. And these details should be expressed in a human-friendly way.

To illustrate the difference between a human-friendly message and a non-human-friendly message, let’s take the hypothetical case where a fault file should have been posted as an object in Amazon S3, but the upload part of the ingestion workflow failed for some reason. When trying to open that dataset, the Amazon SDK would report this low-level error: “The specified key does not exist. (Service S3, Status Code: 404, Request ID: XXXXXX)”. In the context of an ingestion workflow, a more human-friendly message would be “This fault dataset is backed by a file that is either missing or inaccessible.”

 


The IVAAP Data Backend is written in Java. This language has a built-in way to handle errors, so a developer’s first instinct is to use this mechanism to pass human-friendly messages back to end users. However, this approach is not as practical as it seems. The Java language doesn’t make a distinction between human-friendly error messages and low-level error messages such as the one sent by the Amazon SDK, meant to be read only by developers. Essentially, to differentiate them, we would need to create a HumanFriendlyException class and use this class in all places where an error with a human-friendly explanation is available.

This approach is difficult to scale to a large body of code like IVAAP’s. And the IVAAP Data Backend is not just code; it also comes with a large set of third-party libraries that have their own ideas about how to communicate errors. To make matters worse, it’s very common for developers to do this:

 

    try {
        // do something here
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }

 

This handling wraps the exception, making it difficult for the caller to catch. A “better” implementation would be:

 

     

    try {
        // do something here
    } catch (HumanFriendlyException ex) {
        throw ex;
    } catch (Exception ex) {
        throw new RuntimeException(ex);
    }

 

While it is possible to enforce this style for the entirety of IVAAP’s code, you can’t do it for third-party libraries calling IVAAP’s code.

Another issue with Java exceptions is that they tend to occur at a low level, where very little context is known. If a service needs to read a local file, a message like “Can’t read file abc.txt” will only be relevant to end users if the primary function of the service call was to read that file. If reading this file was only incidental to completing the service, bubbling up an exception about this file all the way to the end user will not help.

To provide human-friendly error messages, IVAAP uses a layered approach instead:

  • High-level code that catches exceptions reports these exceptions with a human-friendly message to a specific logging system
  • When exceptions are thrown in low-level code, issues that can be expressed in a human-friendly way are also reported to that same logging system

With this layered approach, where there is a high-level “catch all”, IVAAP is likely to return relevant human-friendly errors for most service calls. And the quality of the message improves as more low-level logging is added. This continuous improvement effort is more practical than a pure exception-based architecture because it can be done without having to refactor how and when Java exceptions are thrown or caught.
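A rough sketch of the pattern (invented names; this is not IVAAP’s actual logging API, and the thread-local storage below is just one simple way to scope a message to the current request):

    // Low-level code reports what it knows in human-friendly terms to a dedicated
    // log; exceptions keep being thrown and caught exactly as before, and the
    // service layer later decides what to expose.
    final class HumanFriendlyLog {

        private static final ThreadLocal<String> LAST_MESSAGE = new ThreadLocal<>();

        static void report(String message) {
            LAST_MESSAGE.set(message);
        }

        static String lastMessage() {
            return LAST_MESSAGE.get();
        }

        private HumanFriendlyLog() {
        }
    }

    class FaultDatasetReader {

        byte[] readFromStorage(String key) {
            try {
                return fetchObject(key); // low-level call that may fail with a storage error
            } catch (RuntimeException ex) {
                // Report the human-friendly explanation without changing how the
                // original exception is thrown or caught.
                HumanFriendlyLog.report(
                        "This fault dataset is backed by a file that is either missing or inaccessible.");
                throw ex;
            }
        }

        private byte[] fetchObject(String key) {
            // placeholder for the actual storage access code
            throw new RuntimeException("The specified key does not exist.");
        }
    }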

To summarize, the architecture of IVAAP avoids using Java exceptions when human-friendly error messages can be communicated. But this is not just an architecture where human-friendly errors use an alternate path to bubble up all the way to the user. It has some reactive elements to it.

For example, if a user calls a backend service to access a dataset, and this dataset fails to load, a 404/Not Found HTTP status code is sent by default with no further details. However, if a human-friendly error was issued during the execution of this service, the status code changes to 500/Internal Server Error, and the content of the human-friendly message is included in the JSON output of this service. This content is then picked up by the HTML5 client to show to the user. I call this approach “reactive” because, unlike a classic logging system, the presence of logs modifies the visible behavior of the service.

With the 2.7 release of IVAAP, we created two categories of human-friendly logs. One is connectivity. When a human-friendly connectivity log is present, 404/Not Found errors and empty collections are reported with a 500/Internal Server Error HTTP status code. The other is entitlement. When a human-friendly entitlement log is present, 404/Not Found errors and empty collections are reported with a 403/Forbidden HTTP status code.
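In code, the resulting mapping might look something like this (a simplified sketch, not IVAAP’s implementation):

    // Maps the default 404/Not Found outcome to a more meaningful status code,
    // depending on which category of human-friendly log was recorded during the call.
    enum HumanFriendlyCategory { CONNECTIVITY, ENTITLEMENT }

    final class StatusCodeMapper {

        int resolveStatus(boolean foundSomething, HumanFriendlyCategory category) {
            if (foundSomething) {
                return 200;                                     // OK
            }
            if (category == HumanFriendlyCategory.CONNECTIVITY) {
                return 500;                                     // Internal Server Error
            }
            if (category == HumanFriendlyCategory.ENTITLEMENT) {
                return 403;                                     // Forbidden
            }
            return 404;                                         // Not Found, the default
        }
    }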

The overall decision on which error message to show to users belongs to the front-end. Only the front-end knows the full context of the task a user is performing. The error handling in the IVAAP Data Backend provides a sane default that the viewer can elect to use, depending on context. OSDU is one of the deployments where the error handling of the data backend is key to the user experience. The OSDU platform has ingestion workflows outside of IVAAP, and with the error reporting capabilities introduced in 2.7, IVAAP becomes a much more effective tool to QA the results of these workflows.

For more information on INT’s newest platform, IVAAP, please visit www.int.com/products/ivaap/

 

Filed Under: IVAAP Tagged With: data, ivaap, java, SDK

Sep 09 2020

INT Adds Client SDK and Improves Seismic and Subsurface Visualization Performance with Latest Release of IVAAP 2.6

This release confirms IVAAP as a leader in the subsurface data visualization space, supporting Data Visualization and Data Management for Subsurface, Exploration, Drilling, or Production Applications in the cloud.

Houston, TX — INT is pleased to announce the newest release of its HTML5 upstream visualization platform, IVAAP™ 2.6, accelerating the pace of our release cycle to respond faster to the growing needs of our clients and partners.

IVAAP now offers a robust client SDK so users can extend the platform to meet their unique business objectives — now users can write new and extend existing widgets; add or remove existing modules; create new data connectors; extend the data model; and more. 

IVAAP 2.6 offers several key enhancements that mean better performance and faster data interaction in the cloud (Azure, AWS, Google): more interpretation capabilities with added fault picking in 2D seismic; improved map-based search with multiple criteria for OSDU R2; UI and navigation improvements; support for real-time wireline logging with decreasing depth; and a new connector to the Energy Information Administration (EIA) database.

“For this latest release, we wanted to focus on features and improvements that would complement our partnership with OSDU and enhance our cloud offerings,” said Hugues Thevoux-Chabuel, Vice President, Cloud Solutions at INT. “We gathered feedback from developers and end users to better understand their needs and worked hard to ensure IVAAP exceeds the current and near-future demands of our clients.”

IVAAP is an upstream visualization platform that enables search and visualization of geophysical, petrophysical, and production data in the cloud. It allows product owners, developers, and architects to rapidly build next-level subsurface digital solutions anywhere without having to start from scratch. 

Read the press release on PRWeb >

Check out int.com/ivaap for a preview of IVAAP or for a demo of INT’s other data visualization products. 

For more information, please visit www.int.com or contact us at intinfo@int.com.

Filed Under: IVAAP, Press Release Tagged With: faults, ivaap, SDK, seismic

