
Jul 06 2021

Rethinking ML Integration to Deliver a User Experience with a True End-to-End Geoscience Workflow

For E&P companies, the next challenge in their digital transformation — once their data has been properly stored, indexed, enriched, and cataloged in the cloud — is to make it available in a collaborative way where users can easily interact with the data through exploration, computation, and analysis. To create this digital workspace, companies must fully integrate machine learning, along with advanced data visualization, in a single platform where users can search, select data from multiple data sources, execute models, and visualize the results.

Accelerating the Transition from R&D to Operations
While many companies have begun the shift toward machine learning, few have integrated it seamlessly. Implementing ML is the goal, but many get stuck along the way, weighed down by cumbersome processes or siloed systems. So the first challenge is transitioning the ML process from R&D to operations, where the model is fully deployed and used by data scientists.

Source: World Wide Technology

 

In a typical process, once the data is prepared and cleaned, it is split and labeled for training so that teams can tell whether the model is working properly. Then the model is moved into operations, data is fed into it, and finally, the user can see the output. The process from development to production, from R&D to operations, is very slow, even with continuous integration and deployment pipelines. This is where a centralized solution can help: it eliminates the need to move data from one system to another or to build yet another application to consume, compute, and visualize the data.

SOURCE: State of Data Science 2020. Anaconda. www.anaconda.com/state-of-data-science-2020.
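The split-train-evaluate loop described above can be sketched in a few lines of Python. This is a minimal stdlib-only illustration of the data-splitting step; a real pipeline would use a framework such as scikit-learn:

```python
import random

def train_test_split(records, labels, test_fraction=0.2, seed=42):
    """Shuffle and split labeled records so model quality can be
    measured on data the model has never seen during training."""
    rng = random.Random(seed)
    indices = list(range(len(records)))
    rng.shuffle(indices)
    cut = int(len(indices) * (1 - test_fraction))
    train = [(records[i], labels[i]) for i in indices[:cut]]
    test = [(records[i], labels[i]) for i in indices[cut:]]
    return train, test

# Toy dataset: 100 single-feature records with binary labels.
records = [[float(i)] for i in range(100)]
labels = [i % 2 for i in range(100)]
train, test = train_test_split(records, labels)
print(len(train), len(test))  # 80 20
```

Only after a model evaluated on the held-out `test` set performs acceptably does it move into operations, which is exactly the hand-off the diagram above shows.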

The Drivers for Centralizing Data Exploration, ML Execution, and Domain Visualization
Integrating machine learning into geoscience workflows has traditionally posed many challenges for data scientists, from siloed, incomplete data to disjointed, disconnected systems. Even now, after geoscientists spend up to 45% of their time ensuring that the data is uniform, organized, and labeled correctly, they must switch to another application to execute the model, another to view the results, and yet another to share those results with their team. By combining these processes in one place, companies can get the most from their data — the most accurate models, with the most accurate business insights.

 

The Emergence of New Visualization Technologies Leveraging ML to Power Data-Driven Decisions
For true ML integration, companies are challenged to rethink the user experience and find a way to utilize a single platform that can simplify this process, from collecting and cleaning data to training and evaluating the model to using the model to power data-driven business decisions. This is why many companies are looking at “re-platforming” existing apps or simply rebuilding new apps that can combine features spread across multiple applications. However, companies do not need to go down that path anymore with the emergence of a new generation of data visualization cloud-native platforms such as IVAAP.

IVAAP is a new way to connect the dots. The cloud-native client creates a single place for geoscientists and data scientists to conduct all the necessary steps in MLOps: data exploration, ML execution, and visualization. The platform offers a digital workspace that connects seamlessly to the various back-end systems for end users.

INT’s IVAAP integrates with ML Service and Data Storage

 

In the example above, the user can access search functions, data sources, and various ML environments such as AWS SageMaker to create a true end-to-end machine learning integration. Streamlining and simplifying the geoscience workflow starts with the end-user experience: the user is presented with specific data inputs and outputs dynamically, based on the processing function or ML program they are executing. IVAAP's dynamic UI delivers a powerful way for data scientists to execute ML in geoscience.
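One way to picture a dynamic UI like this is a form generated from the selected model's declared inputs. The sketch below is purely illustrative — the schema format and names are hypothetical, not IVAAP's actual API — but it shows the idea of deriving UI controls from a model's input declaration:

```python
# Hypothetical model input schema (illustrative, not IVAAP's actual API).
MODEL_SCHEMA = {
    "name": "porosity-predictor",
    "inputs": [
        {"id": "well_log", "type": "dataset", "required": True},
        {"id": "depth_range", "type": "range", "required": False},
    ],
}

def schema_to_form(schema):
    """Map each declared model input to a UI field descriptor, so the
    front end renders the right control for whichever model is chosen."""
    widget_for = {"dataset": "data-picker", "range": "dual-slider"}
    return [
        {
            "field": item["id"],
            "widget": widget_for.get(item["type"], "text-input"),
            "required": item["required"],
        }
        for item in schema["inputs"]
    ]

form = schema_to_form(MODEL_SCHEMA)
print(form[0]["widget"])  # data-picker
```

Swapping in a different model's schema yields a different form, with no front-end changes — which is the point of a dynamic UI.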

If you are interested in learning more, check out the recent AWS/INT webinar for a deeper dive into how the technology works, including the dynamic UI and ML integration.

Learn more about IVAAP here or contact us at info@int.com.


Filed Under: IVAAP Tagged With: data visualization, ivaap, machine learning, streamline workflows

Jan 28 2021

How OSDU Can Help Data Management

Discussing data management challenges with major oil companies, national oil companies (NOCs), and oil services companies over the last few months, we found that it is still quite difficult to find published metrics about their KPIs. During a recent conversation, for example, an operator shared that finding the right data for analysis could take anywhere from one to seven weeks.

Source: Lee C. Lawyer, Chevron Chief Geophysicist, Oil & Gas Journal, Nov. 4, 1991, pp. 51-52.

Whether companies fear that releasing this data might expose inefficiencies, or the data is simply proprietary, what is interesting is that the problem persists: multiple people perform the same checks, companies can't find their data, and what they can find, they don't trust. For most geologists and geoscientists, data validation—validating, correcting, and verifying data—takes between 30 and 90 percent of their time before they can even begin to use the data. This challenge has become a major blocker as they transition from human analysis to Artificial Intelligence (AI) and Machine Learning (ML) to automate tasks and decisions.

The problems lie in how application systems have been built over time: in silos. To quote Teradata’s Jane McConnell in her recent blog post on OSDU, “Keeping data in separate systems with separate indexes, separate master data management issues, and often separate physical hardware, only means extra work, master data management problems, and unnecessary hassle when we try to bring the data together so we can analyze it as a whole.”

So, how do we fix it? OSDU data standardization with IVAAP data visualization is one approach.

What Is OSDU?

The Open Subsurface Data Universe™ (OSDU) Forum is an Industry Forum formed to establish an open subsurface Reference Architecture. OSDU is created around a simple idea: can you find, use, and trust your data? 

The objective is to move from a traditional model based on multiple types of data to a single integrated data model. This approach treats data as an asset that can be used throughout the various stages of the workflow and across applications, decoupled from its native workflow and application.


The OSDU mission is to establish an open subsurface Reference Architecture as follows:

  • A cloud-native data platform reference architecture, with usable implementations for Microsoft Azure, Amazon AWS, and Google GCP
  • Application Standards (APIs) to ensure that all applications (microservices), developed by various parties, can run on any OSDU data platform
  • Industry Data Standards leveraged for frictionless integration and data access

OSDU Value Proposition: Access and Acceleration

To increase data accessibility, data can no longer be kept siloed. Companies must now accelerate their digital transformation by taking advantage of the growing OSDU marketplace and the rapid adoption of new solutions using OSDU APIs.

OSDU standardizes and secures data currently spread across applications in different formats.
 
The OSDU data platform architecture helps separate data from its native application—from the workflow and from the storage infrastructure. It is indexed, discoverable, and consumable. This evolution is critical as we automate analysis with AI, ML, etc. Digitalization requires consistency and confidence in all data and standards for data and metadata to take out the guesswork.

Improve Findability, Collaboration, and Data Exchange

To improve data management workflows and increase collaboration across teams, the smart approach is to combine a common data standard (OSDU) with enterprise viewer technology (IVAAP).

Data Standardization: Findability and Discovery

A common data standard improves findability, using powerful search engines to reduce the time to find the right data from weeks to days. When possible, it leverages data lakes to access all data from a single repository, or a hybrid model to aggregate data in place. Removing the bottleneck of "person-dependent" models means immediate access and data discovery, with security at the object level.

Data Visualization: Collaboration and Data Exchange

The ability to view multiple sources of data in a single dashboard is a critical piece of the new data management workflow. It makes data exchange easier, enables process optimization and better QC, and translates to better visibility for KPIs. Enabling data selection directly from the search eliminates the need to move data, and aggregating data in a single, shared dashboard means faster collaboration and better decision-making.

Empower Better Data Management

Data management can benefit from a single point of access of all subsurface data, simplifying data exchange, sharing, and consumption. For data managers, OSDU can enable new technology solutions that aggregate data, simplify search, and improve discoverability. Paired with the right subsurface data visualization platform technology, OSDU can pave the way to automate tedious tasks, workflows, and analysis, ultimately providing quicker information to stakeholders for faster decisions.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/


Filed Under: IVAAP Tagged With: data management, data standards, data visualization, ivaap, OSDU

Jan 19 2021

The Importance of Integrating Visualization and Collaboration in a Digital Transformation Strategy

The oil and gas industry is experiencing significant change. Aging workforces, low oil prices, and pressure to reduce environmental footprint are all forcing E&P companies to rethink how they operate, which is why there is so much interest in embracing digital transformation. But digital transformation is more than the use of new digital technologies to optimize workflows and improve business processes. It is the strategic application of technology, data, and people in order to achieve more desirable business outcomes.

Digital Transformation in Oil and Gas Is Fueled by New Technologies

  • IoT and drones for data collection
  • The cloud and associated data lakes for data management
  • Real-time data streaming for asset management
  • Machine Learning for analysis

Consider an approach to 3D seismic acquisition using drones and IoT, where real-time visualization of data captured in the field plays a key role in assuring accuracy and the highest quality data possible. 

Seismic acquisition in complex environments with hard-to-access topography, such as dense rainforest, is expensive and hazardous. And too often, the resulting seismic imaging is of less-than-desired quality as well. To address some of these issues, TotalEnergies R&D created the METIS project (Multiphysics Exploration Technology Integrated System), with the objective of improving data quality and speed of data acquisition through real-time quality control and processing. TotalEnergies, in partnership with Wireless Seismic and SAExploration, conducted a seismic acquisition experiment in a remote area of New Guinea, using drones and Wireless Seismic’s Downfall Air Receiver Technology (DARTs). Dropped from large swarms of drones over a targeted area, DART seismic receivers can be an efficient option for gathering data in difficult-to-access areas, but to be effective they must be well-positioned and operational. Therefore, it is necessary to quickly validate the positioning and data quality from the DARTs. 

To confirm that DARTs are positioned correctly, data is captured from each DART receiver during the drop, and proprietary calculations are used to monitor how successful the geophone drop has been. Immediate visualization of the drop analyses is essential to the operation. If any of the DARTs fail to land properly or seismic data is faulty or missing, the system command center can make adjustments in real time to optimize the acquisition plan. This real-time monitoring and optimization minimizes data problems that might otherwise require reshooting parts of the survey, which would be an expensive waste of time and resources.

Effective Collaboration Between Oil Companies and Service Companies Is Required

Effective collaboration does not mean giving up proprietary intellectual property, but users on both sides must be prepared to be flexible and willing to integrate. We can start by making data accessible to all the stakeholders in a project. Too often, information is siloed as various disciplines tend to use their own local systems and work independently. They don’t share ideas and seek advice from each other as much as they should. By getting data out of the local systems and centralizing that data in the cloud, it can be made accessible to anyone who needs it. 

Data also needs to be easier to find. Oftentimes, explorationists spend up to 40% of their time not just searching for data but searching for the right data. We need improved data delivery so that explorationists have the right data at their fingertips when they need it, including better incorporation of real-time data in the decision-making process. We also need to make it possible to integrate disparate data. This will be especially important for effective machine learning, analytics, and advanced visualization.

Adoption of cloud technologies and cloud data lakes services makes it possible to create metadata and indexes which can be used to organize all the data in a project—thereby facilitating intelligent searches and quick access to the exact data needed for analysis and interpretation. Access is greatly simplified when all the data is in one place.  Centralized data, when organized by good metadata and indexing, also makes it easier to integrate disparate data for analytics or machine learning.

A good industry example is the Open Subsurface Data Universe™ (OSDU) Forum, supported by nearly 200 operators and service companies. OSDU was formed to establish an open subsurface data model and reference architecture with implementations for all the major cloud service providers. The OSDU promotes application standards (APIs) to ensure that all applications, developed by various parties, can run on any OSDU data platform. The goal of OSDU is to deliver the same value and services while running on different cloud service providers and in different data centers.

The new architecture and data model standards can then be completed with an application layer that will enable all your data to be accessible from a single place, make data searchable and discoverable, provide tools to integrate domain expert workflows and leverage AI/ML models, and deliver a collaborative work environment with advanced visualization to quickly share information between users and drive better decisions.

User Adoption Is Critical to a Successful Digital Transformation 

To fully benefit from digital transformation, companies must change their way of working. They have to embrace new concepts such as remote workforces and virtual teams – where collaboration between teams is a key to successful projects. Users need to be fully onboarded to understand the functionality and benefits of the new digital processes, and the reasons behind the change. Rethinking software is also a must. Applications must be web-aware and mobile-responsive. And as more millennials come into the workforce, they bring with them an expectation that enterprise applications/systems embed domain expertise and behave as intuitively as the applications they have become used to on their tablets and smartphones.

Strong User Adoption Requires Advanced Visualization and Collaboration 

In order to be effective, digital transformation must be holistic and integrate as much of the workflow as possible.  Productive collaboration among exploration and production teams is too often prevented by a lack of effectively integrated visualization of subsurface data.  This can be mitigated by solutions that enable all your data to be accessible from a single place, searchable, and deliver a collaborative work environment with advanced visualization to quickly share information between users and drive better decisions.  Furthermore, these new solutions can integrate expertise from multiple domains and bring consistency to their workflows by using common visualization components across all their applications—thereby creating a superior single app experience. 

You cannot empower collaboration and improve operational efficiency if you still operate in silos.  The ultimate goal would be a single system, with a mechanism to aggregate and integrate data, built from as many common shared components as possible.  These commonly shared components may come from different local systems or perhaps different vendors.


Filed Under: IVAAP Tagged With: data visualization, digital transformation, oil and gas, subsurface data visualization

Nov 20 2020

A New Era in O&G: Critical Components of Bringing Subsurface Data to the Cloud

The oil and gas industry was one of the first to generate actionable data in the modern sense. For example, the first seismic imaging was done in 1932 by John Karcher.

 

Seismic dataset in 1932.

 

Since that first primitive image, seismic data has been digitized and has grown exponentially in size. It is usually represented in monolithic datasets that range from a couple of gigabytes to petabytes for pre-stack data.

Seismic datasets today.

 

The long history, large amount of data, and the nature of the data pose unique challenges that often make it difficult to take advantage of advancing cloud technology. Here is a high-level overview of the challenges of working with oil and gas data and some possible solutions to help companies take advantage of the latest cloud technologies. 

Problems with Current Data Management Systems

Oil and gas companies are truly global, and their data is often distributed among multiple disconnected systems in multiple locations. This makes it difficult not only to find and retrieve data when necessary, but also to know what data is available and how useful it is. Answering those questions often requires person-to-person communication, and some data may even live in offline systems or on someone's desk.

The glue between those systems is data managers, who are amazing at what they do but still introduce a human factor into the process. They have to understand which dataset is being requested, search for it across various systems, and finally deliver it to the original requester. How much time does this process take? You guessed it—way too much! And in the end, the requester may realize that it's not the data they were hoping for, and the whole process goes back to square one.

After the interpretation and exploration process, decisions are usually made on the basis of data screenshots and cherry-picked views, which limit the ability of specialists to make informed decisions. Making bad decisions based on incomplete or limited data can be very expensive. This problem would not exist if the data was easily accessible in real-time. 

And that doesn’t even factor in collaboration between teams and countries. 

How can O&G and service companies manage their massive subsurface datasets better by leveraging modern cloud technologies?

3 Key Components of Subsurface Data Lake Implementation

There are three critical components of a successful subsurface data lake implementation: a strong cloud infrastructure, a common data standard, and robust analysis and visualization capabilities. 

 


 

AWS: Massive Cloud Architecture

While IVAAP is compatible with any cloud provider—along with on-premise and hybrid installations—AWS offers a strong distributed cloud infrastructure, reliable storage, compute, and more than 150 other services to empower cloud workflows. 

OSDU: Standardizing Data for the Cloud

The OSDU Forum is an Energy Industry Forum formed to establish an open subsurface Reference Architecture, including a cloud-native subsurface data platform reference architecture, with usable implementations for major cloud providers. It includes Application Standards (APIs) to ensure that all applications (microservices), developed by various parties, can run on any OSDU data platform, and it leverages Industry Data Standards for frictionless integration and data access. The goal of OSDU is to bring all existing formats and standards under one umbrella which can be used by everyone, while still supporting legacy applications and workflows. 

IVAAP: Empowering Data Visualization

A data visualization and analysis platform such as IVAAP, which is the third key component to a successful data lake implementation, provides industry-leading tools for data discovery, visualization, and collaboration. IVAAP also offers integrations with various Machine Learning and artificial intelligence workflows, enabling novel ways of working with data in the cloud.


 

Modern Visualization — The Front End to Your Data

To visualize seismic data, as well as other types of data, in the cloud, INT developed a native web visualization platform called IVAAP. IVAAP consists of a front-end client application and a backend. The backend takes care of accessing, reading, and preparing data for visualization. The client application provides a set of widgets and UI components that power search, visualization, and collaboration for its users. Data reading and other low-level functions are abstracted from the client by a Domain API and work through connector microservices on the backend. To support a new data type, you only need to create a new connector. Both parts provide an SDK for developers.
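The connector pattern described above can be sketched abstractly. The class and method names below are illustrative, not IVAAP's actual SDK; the point is that each data type gets a small connector behind a common interface, and the rest of the backend talks only to that interface:

```python
from abc import ABC, abstractmethod

class DataConnector(ABC):
    """Common interface the backend depends on; one implementation
    per data type or storage system."""

    @abstractmethod
    def list_datasets(self):
        """Return identifiers of the datasets this connector exposes."""

    @abstractmethod
    def read(self, dataset_id, byte_range=None):
        """Return raw bytes for a dataset, optionally a sub-range."""

class InMemorySegyConnector(DataConnector):
    """Toy connector serving SEG-Y-like blobs from memory; a real one
    would read from object storage or a data platform API."""

    def __init__(self, blobs):
        self._blobs = blobs

    def list_datasets(self):
        return sorted(self._blobs)

    def read(self, dataset_id, byte_range=None):
        data = self._blobs[dataset_id]
        if byte_range is None:
            return data
        start, end = byte_range
        return data[start:end]

conn = InMemorySegyConnector({"survey-a": b"0123456789"})
print(conn.read("survey-a", (2, 5)))  # b'234'
```

Supporting a new data type then means writing one new class against `DataConnector`, with no changes to the rest of the backend.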

Compute Close to Your Data

Once the data is in the cloud, a variety of services become available. One of them is Amazon Elasticsearch Service, which indexes the data and provides a search interface. Another is Amazon EC2, which provides compute resources that are as distributed as the data is. That's where IVAAP gets installed.

One principle of cloud computing is that data has a lot of gravity: computing tends to move closer to it. This means it is better to place processing as close to the data as possible. With Amazon EC2, we at INT can place our backend very close to the data, regardless of where it is in the world, minimizing latency for the user and enabling on-demand access. Elastic compute resources also let us scale up when usage increases and down when fewer users are active.

 


All of this works together to make your data on-demand—when the data needs to be presented, all the tools and technologies mentioned above come into play, visualizing the necessary data in minutes, or even seconds, with IVAAP dashboards and templates. And of course, the entire setup is secure on every level. 

Empower Search and Discovery

The next step is to make use of this data, and to do so, we need to give users a way to discover it: what should be made searchable, how should the search be set up, and how should it be exposed to users?

Since searching through the numerical values of the data won't provide much discovery potential, we need additional metadata. This metadata is extracted along with the data and also uploaded to the cloud. All of it, or a subset, is then indexed using the Elasticsearch service. IVAAP uses an Elasticsearch connector to perform the search, as well as tools to invoke it through an interactive map interface or filter forms presented to the user.
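A minimal sketch of this metadata-driven discovery: index a few metadata documents and filter them the way a map-driven search would. In the real system this is Elasticsearch; here a plain Python list stands in so the example is self-contained, and the field names are illustrative:

```python
# Toy metadata index: one document per dataset, with a bounding box.
METADATA_INDEX = [
    {"id": "seis-001", "kind": "seismic",
     "bbox": {"min_lon": 1.0, "max_lon": 3.0, "min_lat": 56.0, "max_lat": 58.0}},
    {"id": "well-042", "kind": "well-log",
     "bbox": {"min_lon": 2.0, "max_lon": 2.1, "min_lat": 57.0, "max_lat": 57.1}},
]

def search(index, kind=None, lon=None, lat=None):
    """Filter documents by data kind and a point-in-bounding-box test,
    mimicking a search invoked from an interactive map."""
    hits = []
    for doc in index:
        if kind is not None and doc["kind"] != kind:
            continue
        if lon is not None and lat is not None:
            b = doc["bbox"]
            if not (b["min_lon"] <= lon <= b["max_lon"]
                    and b["min_lat"] <= lat <= b["max_lat"]):
                continue
        hits.append(doc["id"])
    return hits

print(search(METADATA_INDEX, kind="seismic", lon=2.0, lat=57.0))  # ['seis-001']
```

A real Elasticsearch deployment replaces the loop with an inverted index and geo queries, but the contract — metadata in, matching dataset IDs out — is the same.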

How can you optimize web performance of massive domain datasets?

Visualizing Seismic Datasets on the Web

There are two very different approaches to visualizing data. One is to render it on the server and send images to the client; this lacks interactivity, which limits the decisions that can be made from those views. The other is to send data to the client and visualize it on the user's machine. IVAAP implements both approaches.

While the preferred method—sending data to the client's machine—provides limitless interactivity and responsiveness, it also poses a special challenge: the data is just too big. Transferring terabytes from the server to the user would cause serious problems. So how do we solve this challenge?

First, it is important to understand that not all the data is always visible. We can calculate which part of the data is visible on the user’s screen at any given moment and only request that part. Some of the newer data formats are designed to operate with such reads and provide ways to do chunk reads out of the box. A lot of legacy data formats—for example, SEG-Y—are often unstructured. To properly calculate and read the location of the desired chunk, we need to first have a map—called an Index—that is used to calculate the offset and the size of chunks to be read. Even then, the data might still be too large. 
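The "Index" idea can be made concrete for the simplest case: a SEG-Y file whose traces all have the same length. SEG-Y puts a 3,200-byte text header and a 400-byte binary header at the front, then fixed 240-byte trace headers; given the trace geometry, the byte range of any run of traces is plain arithmetic, so it can be fetched with a single ranged read (e.g., an HTTP Range request against object storage). For files with variable-length traces, the index must instead store per-trace offsets — that is exactly why it is built:

```python
TEXT_HEADER = 3200      # EBCDIC text header at the start of the file
BINARY_HEADER = 400     # binary file header
TRACE_HEADER = 240      # header preceding each trace
BYTES_PER_SAMPLE = 4    # e.g. 4-byte IBM or IEEE floats

def trace_byte_range(first_trace, trace_count, samples_per_trace):
    """Return (offset, length) covering `trace_count` consecutive
    traces, assuming every trace has the same sample count."""
    trace_size = TRACE_HEADER + samples_per_trace * BYTES_PER_SAMPLE
    offset = TEXT_HEADER + BINARY_HEADER + first_trace * trace_size
    return offset, trace_count * trace_size

# Byte range of traces 1000..1009 in a file with 1,500 samples/trace:
offset, length = trace_byte_range(1000, 10, 1500)
print(offset, length)  # 6243600 62400
```

Only the 62 KB covering the visible traces crosses the network, instead of the whole multi-gigabyte file.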

Luckily, we don’t always need the whole resolution. If a user’s screen is 3,000 pixels wide, they won’t be able to display all 6,000 traces, so we can then adaptively decrease the number of traces to provide for optimal performance.
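The decimation step itself is simple: if the viewport is only `max_width` pixels wide, there is no point shipping more traces than that, so pick an evenly spaced subset. A minimal sketch:

```python
def decimate(trace_indices, max_width):
    """Return at most max_width evenly spaced entries, so the number
    of traces sent never exceeds the pixels available to draw them."""
    n = len(trace_indices)
    if n <= max_width:
        return list(trace_indices)
    step = n / max_width
    return [trace_indices[int(i * step)] for i in range(max_width)]

traces = list(range(6000))
picked = decimate(traces, 3000)
print(len(picked), picked[:3])  # 3000 [0, 2, 4]
```

Production code would refine this (e.g., anti-aliasing or min/max per bin rather than plain skipping), but the bandwidth saving comes from the same idea.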


Often the chunks we read are in different places in the file, making it necessary to do multiple reads at the same time. Luckily, both S3 storage and IVAAP support this behavior: we can fire off thousands of requests in parallel, maximizing the efficiency of the network. And even then, once the traces are picked and ready to ship, we apply vectorized compression before sending the data to the client.
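Firing off many ranged reads in parallel looks roughly like the sketch below. The fetch function here is simulated with an in-memory byte string; in production it would be a ranged GET against S3, but the fan-out pattern is the same:

```python
from concurrent.futures import ThreadPoolExecutor

FILE = bytes(range(256)) * 100  # stand-in for a remote object

def fetch_range(offset, length):
    """Simulated ranged read; a real implementation would issue an
    HTTP GET with a Range: bytes=offset-(offset+length-1) header."""
    return FILE[offset:offset + length]

# Scattered chunks, as produced by the index lookup for a viewport.
chunks = [(i * 1024, 512) for i in range(16)]

# Threads are appropriate here because the work is I/O-bound.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda c: fetch_range(*c), chunks))

print(len(results), len(results[0]))  # 16 512
```

`pool.map` preserves input order, so the chunks come back in the order the index requested them even though the reads complete out of order.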

We have been talking about legacy file formats, but GPU compression is also available for newer formats like VDS/OpenVDS and ZGY/OpenZGY. These newer formats also provide perks like brick storage, random access patterns, adaptive level of detail, and more.

Once the data reaches the client, JavaScript and Web Assembly technologies come together to decompress the data. The data is then presented to the user using the same technologies through some beautiful widgets, providing interactivity and a lot of control. From there, building a dashboard—drilling, production monitoring, exploration, etc.—with live data takes minutes.

All the mentioned processes are automated and require minimal human management. With all the work mentioned above, we enable a user to search for the data of interest, add it to desired visualization widgets (multiple are available for each type of data), and display on their screen with a set of interactive tools to manipulate the visuals. All within minutes, and while being in their home office. 

That’s not all—a user can save the visualizations and data states into a dashboard and share it with colleagues on a different continent, who can then open the exact same view in a matter of minutes. With more teams working remotely, this helps facilitate collaboration and reduces data redundancy and errors.


Data Security

How do we keep this data secure? There are two layers of authentication and authorization in such a system. First, Amazon S3 enforces identity-based access, guaranteeing that data is visible only to authorized requests. IVAAP uses OAuth2 integrated with AWS Cognito to authenticate the user and authorize requests. The user logs into the application and receives tokens that allow them to communicate with IVAAP services. The client passes these tokens back to the IVAAP server, and in the backend, IVAAP validates them with AWS Cognito whenever a data read needs to happen. Once validated, a temporary signed access token is issued by S3, which IVAAP uses to read the file from its bucket.
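The temporary signed-access pattern at the end of that flow can be illustrated with a simplified HMAC sketch. This is conceptually similar to S3 pre-signed requests but is NOT AWS's actual signing algorithm — it only shows why a signed, expiring grant lets the storage layer authorize a read without a second round trip to the identity provider:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # never leaves the signing service

def sign_read(bucket, key, expires_at):
    """Issue a signature granting a read of bucket/key until expires_at."""
    msg = f"{bucket}/{key}/{expires_at}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_read(bucket, key, expires_at, signature, now=None):
    """Allow the read only if the grant has not expired and the
    signature matches, using a constant-time comparison."""
    now = time.time() if now is None else now
    if now > expires_at:
        return False
    expected = sign_read(bucket, key, expires_at)
    return hmac.compare_digest(expected, signature)

exp = time.time() + 300  # grant valid for five minutes
sig = sign_read("seismic-bucket", "survey-a.segy", exp)
print(verify_read("seismic-bucket", "survey-a.segy", exp, sig))  # True
```

Tampering with the bucket, key, or expiry invalidates the signature, and the expiry bounds how long a leaked grant is useful — the same properties the real S3 mechanism provides.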

Takeaways

Moving to the cloud isn’t a simple task and poses many challenges. But by combining technology from AWS and INT’s IVAAP, underpinned by OSDU data standardization, we can create a low-latency data QC and visualization system that puts all the data in one place, provides tools to search for data of interest, enables real-time, on-demand access from any location with Internet access, and does all of this securely.

For more information on IVAAP, please visit int.com/ivaap/ or to learn more about how INT works with AWS to facilitate subsurface data visualization, check out our webinar, “A New Era in O&G: Critical Components of Bringing Subsurface Data to the Cloud.”


Filed Under: IVAAP Tagged With: AWS, cloud, data visualization, digital transformation, ivaap, subsurface data visualization
