
Apr 15 2019

TotalEnergies to Use INT’s Data Visualization and Analysis Platform and Libraries Software for the Next Five Years

TotalEnergies and INT have recently announced a long-term corporate agreement that will give TotalEnergies access to INT’s GeoToolkit, the most widely adopted JavaScript-based data visualization software in Oil and Gas.

TotalEnergies will also be able to take advantage of IVAAP, one of the leading data visualization software platforms for digital subsurface projects deployed on the web or in a private cloud.

“With the growth of Big Data and IoT, the E&P industry needed a solution that would empower companies to combine and utilize vast amounts of incredibly useful, yet disparate domain data easily, in one powerful software platform,” said Dr. Olivier Lhemann, founder and CEO, Interactive Network Technologies.

“Fortunately, our unique expertise and position in the industry allowed us to recognize and respond to this need quickly, so we developed IVAAP. Now, we’re proud to partner with TotalEnergies to empower domain experts with the right digital tools they need to gain valuable, timely insights from their data.”

With this agreement, TotalEnergies Exploration & Production will gain access to GeoToolkit and IVAAP’s fully extensible platform, cloud-based architecture, and comprehensive set of data connectors to current systems such as WITSML, PPDM, OSIsoft PI, and many others.

For more information on INT’s products and services, visit our products page or email us to discuss how we can help you visualize your upstream data.

View the press release

Learn more about INT’s products

 

Filed Under: GeoToolkit, Uncategorized Tagged With: geotoolkit, ivaap, TotalEnergies

Feb 27 2019

How to Empower Developers with à la Carte Deployment in IVAAP Upstream Data Visualization Platform

When you get started with IVAAP’s backend SDK, the first API you will probably encounter is its “Lookup” system. A lookup system is a basic component of a pluggable architecture. Within such an architecture, when a program needs to perform an action, it is not aware of the specifics of that action’s implementation; it just knows how to find the implementation and execute it.

There are many benefits to separating service definition from implementation. A program might have one default implementation that is overridden by a plugin. Clients can customize an application’s behavior without having access to that application’s code. “Looking up” the concrete implementation of a service is an effective way to offer options without cluttering the code with “if” statements that need to change each time a new option is added. IVAAP was not just meant to be a web application “built for purpose”—we wanted it to be a platform that customers can extend on their own. With this goal in mind, the first component we picked for IVAAP’s architecture was a “lookup” system.

The Java language has a standard way of performing such dependency injection. Java’s ServiceLoader class is central to this mechanism, but it is a bit outdated and maintenance-heavy. To plug classes into a ServiceLoader, you need to edit a separate META-INF/services text file that contains the name of the class you want to plug in. It offers no protection against typos, and if a class name changes, the injection breaks unless you remember to update this service file. This design violates the principle that “what changes together should belong together.”
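For comparison, here is what the standard ServiceLoader mechanism looks like in plain Java. This is a generic sketch with illustrative class names, not IVAAP code:

// Service interface and a default implementation (illustrative names)
public interface EntitlementsService {
    boolean isAllowed(String userId, String curveId);
}

public class DefaultEntitlementsService implements EntitlementsService {
    public boolean isAllowed(String userId, String curveId) { return true; }
}

// The file META-INF/services/com.example.EntitlementsService must contain the single line:
// com.example.DefaultEntitlementsService

import java.util.ServiceLoader;

// Consumers discover implementations at runtime:
ServiceLoader<EntitlementsService> loader = ServiceLoader.load(EntitlementsService.class);
for (EntitlementsService service : loader) {
    // Implementations come back in classpath order; a typo in the services file
    // or a renamed class silently breaks this discovery.
}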

Unlike the ServiceLoader, IVAAP uses Java annotations to register classes into its lookup. These annotations belong in the same class they register, and they don’t break when that class is renamed. For example, here is how the built-in entitlements controller is registered:

@SelfRegistration(lookupClass = AbstractEntitlementsController.class, position = 50)
public class DefaultEntitlementsController extends AbstractEntitlementsController {

The IVAAP SDK also has the option to perform this registration programmatically. This is the equivalent registration using code instead of annotations:

Lookup.Factory.getInstance().register(AbstractEntitlementsController.class, new DefaultEntitlementsController(), 50);

In many ways, this registration is very similar to what ServiceLoaders require. You can unregister classes, too. For example, here is how a customer would override how IVAAP controls entitlements:

@ClassUnregistration(lookupClass = AbstractEntitlementsController.class, registeredClass = DefaultEntitlementsController.class)
@SelfRegistration(lookupClass = AbstractEntitlementsController.class, position = 100)
public class MyEntitlementsController extends AbstractEntitlementsController {

This is the equivalent registration using code instead of annotations:

Lookup.Factory.getInstance().unregisterClass(AbstractEntitlementsController.class, DefaultEntitlementsController.class);
Lookup.Factory.getInstance().register(AbstractEntitlementsController.class, new MyEntitlementsController(), 100);

Each registration has a position. This is mostly useful when several classes of the same type need to be registered in the lookup. By setting the position attribute, you can customize which class will be found first when the content of the lookup is inspected. In other words, the position controls the order in which all “if” statements will be executed. It also has performance tuning use cases. For example, there are many service handler classes registered, each one representing a microservice. You can decide the order in which they will be matched to a URL, optimizing for the most frequently used ones.
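As a purely hypothetical illustration (the handler class names below are made up; only the register call shown earlier comes from the SDK), two service handlers could be registered at different positions so that the most frequently used microservice is inspected first:

Lookup.Factory.getInstance().register(AbstractServiceHandler.class, new WellLogServiceHandler(), 10);
Lookup.Factory.getInstance().register(AbstractServiceHandler.class, new ReportingServiceHandler(), 50);
// The position values control the order in which the registered handlers are inspected.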

When annotations became part of the Java ecosystem, they were widely adopted as an alternative to XML configuration files. This approach has sometimes been overused: multiple types of annotations, each carrying numerous attributes, can become just as indecipherable as the XML configuration files they were meant to replace. The IVAAP backend avoids this pitfall by using the same annotations across all option types, which means there is only one set of annotations to learn for a programmer extending the platform.

Another nice feature of IVAAP’s lookup system occurs at startup. When the classpath is inspected for lookup annotations, the classes found are logged. There are 300 modules in IVAAP, and no two customers pick the same options. When troubleshooting is needed, these logs provide an unambiguous record of which options are actually in play for a specific deployment.

Inspecting jar files for lookup registrations at startup takes time. For performance reasons, you might elect to ignore jars that are known to be registration-free. You typically exclude external libraries by adding lookup.ignore configuration files along with your jars. These files use regular expressions to exclude jars by name. You also have the option to set environment variables to achieve the same result. The latter method is particularly useful in the context of Docker deployments: it gives DevOps the option to build one Docker image containing all the jars of the platform and customize how each deployment behaves just by setting environment variables, so the same image can be reused in multiple deployment contexts.
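As a rough illustration only (the exact file syntax is an assumption, not documented here), a lookup.ignore file could simply list patterns matching third-party jars that are known to contain no registrations:

commons-.*\.jar
jackson-.*\.jar
netty-.*\.jar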

The Java ecosystem has many dependency injection libraries available. They tend to require configuration and many keystrokes, essentially interrupting the developer while coding is underway. IVAAP’s backend doesn’t just offer an easy way to customize how it behaves; it also offers an unobtrusive way to create pluggable behaviors while developers are working. In fact, the simplicity of the lookup system is the reason so many aspects of IVAAP are pluggable. When the system is introduced to new developers, they find it easy to learn, yet versatile. DevOps teams appreciate that they can leverage it when a fast turnaround is needed.

Visit our products page for more information about IVAAP or contact us for a demo.

Filed Under: Uncategorized Tagged With: ivaap, lookup, pluggable

Apr 19 2018

Enterprise Data Visualization: A Critical Component of Your E&P Digital Platform

The Forces Driving Transformation

Traditionally, the oil and gas sector has been slower than other industries to adopt new software and technology. When the market is doing well, companies have little time or motivation to invest in change. When the market is bad, companies lack the appropriate resources and budget.

Since 2015, the oil and gas sector has experienced significant changes. An aging workforce, low oil prices, and the pressure to transition to cleaner energy are among the many factors forcing the Exploration and Production industry to rethink how it works, recruits, trains, and operates in order to stay in business.

As a result of the many factors at play in today’s market, the pressure to increase or restore profitability and expand operations is unavoidable. Many companies are choosing to proactively adopt and integrate new technology to help them respond to this pressure to evolve.

Digital Platform Building Blocks

The term “digital transformation” is widely used to describe the integration of technology into all areas of a business, effectively digitizing operations to provide better access to data and decision-leveraging systems in the cloud.

As part of the digital transformation process, data science and machine learning can be smart ways for companies to focus more on analysis and to automate operations that can be expensive and tedious to perform manually.

Whether built internally or purchased from a software provider, many of the digital platforms used in the industry integrate each company’s proprietary science and workflows. While each platform may be unique, they do share a set of common functionalities, such as:

  • Cloud Data Lake
  • Data Mining
  • Machine Learning
  • Cluster Analysis
  • Databases
  • Streaming of Data Sources
  • Cloud Storage
  • Visualization

Unfortunately, the focus of selecting and/or developing a digital platform is often on the data science aspect, leaving user interaction and visualization as an afterthought, often overlooked until late in the development lifecycle.

Cross-Domain Visualizations

Looking at some of the key functions and workflows in E&P from a high level, it is clear that many of the use cases and workflows share common data views and data sources and would benefit from a platform that integrates them all.

Majors and IOCs should consider these shared use cases when designing or choosing the user experience of their digital platform, as these use cases require visualization technology that is modular enough to tailor the user experience and workflows to each stage of the lifecycle (see Fig. 1).

Fig. 1 – Key areas where cross-domain visualization technology is needed to consume data and make decisions.

 

How to Evaluate an Enterprise Cloud Viewer

A product owner, architect, or chief engineer/developer who wants to build or implement a digital solution must assemble and assess various pieces of the puzzle: machine learning engine, database, cloud infrastructure, data search, workflows, etc.

In order to avoid reinventing the wheel, many companies choose a cloud-friendly enterprise viewer platform that can be used out of the box. However, there are many factors to consider when evaluating an enterprise cloud viewer.

The Number One Cause of Cloud Solution Failure: Lack of User Adoption

To avoid adoption failure, an Enterprise Cloud Viewer must offer a consistent and unified user experience across workflows through a single interface that can be adapted as needed, depending on the workflow and the user profile. Ideally, a solution built using a user-centered design approach will provide a contextual experience for both remote and on-site users.

Cloud Viewer Technology Adoption: Other Considerations

This list of points to consider when adopting a new cloud viewer platform is not intended to be exhaustive, but it is a great start for an evaluation or as a requirement list for an RFP for an E&P Cloud Visualization Framework:

Considerations for choosing an E&P enterprise cloud viewer.

 

1. Advanced Data Visualization

The visualization framework should offer specific domain views: WellLog, Schematics, BHA (animated or not), Seismic, and 4D/3D/2D as a web-based local and/or remote visualization service. It should also be able to display very large datasets with high performance, seamlessly leveraging compression/decompression and decimation algorithms.

  • G&G/Seismic — Support standard file formats in E&P, such as SEG-D, SEG-Y, DLIS, and LAS in 2D/3D and custom formats used in specific workflows.
    • Visualize Faults: Simple geometry, complex geometry, fault surface interpolation
    • Visualize GeoModel: 3D layer representation with horizon, faults
    • 3D volumetric rendering
  • Drilling / Drilling Monitoring — Display well log and deviated well log data to perform geosteering (GST). Display surface sensor data (torque, hook load, pressure, depth, pump strokes) and set custom alarms (swab & surge, vibration, ROP, WOB, RPM)
    • Display well data with seismic (overlay)
    • Well log visualization (plot multiple curves, view markers, line displayed, colored display)
    • Offset wells display for correlation with the active well.
    • Display of drilling analytics to optimize rig activity (OPEX)
    • Directional drilling — plan vs. actual (3D trajectory in RT, BHA schematics)
    • Composite log
    • NPT visualization
    • Geomechanics (rose diagram)
    • WellLog – providing display of curves and sensor data.
    • Multi-well view supported in the same browser session
  • Logging (wireline & while drilling) — Visualize petrophysical data
  • Completion — Display logs, visualize fracking data, monitor casing runs, view well schematics (plan vs. actual)
    • Time and depth (MD/TVD) based data, ability to switch indexes, image log support, passes/run, etc., from surface to sensor depth
  • Production — Monitor multiple individual well performance and well parameters, provide alarm messages for abnormal conditions, real-time reading, trending capabilities, KPIs reporting capabilities

2. Intuitive Visual Exploration

To ensure user adoption, the user interface must be intuitive. It should enable the exploration of data via the manipulation of chart images, with the color, brightness, size, shape, and motion of visual objects representing aspects of the dataset being analyzed. The tool should enable users to analyze the data by interacting directly with a visual representation of it.

  • Versatile User Interface/User Experience (UI/UX) with user-configurable preferences and collaborative functions. An intuitive UI/UX requiring minimal training of internal/external customers.
  • HTML5/JavaScript to support mobile-responsive needs, including non-graphical data that can be displayed on smaller devices.
  • Support touch screen (for mobile, tablet, touch screen monitors, etc.)
  • Map views for navigation and selection of data

3. Embedded Analytics

Users should be able to easily access advanced analytics capabilities contained within the platform or through the import and integration of externally developed models.

4. Interactive Dashboards

The tool should allow users to create highly interactive dashboards and content with visual exploration and be able to conduct search-based discovery.

  • Reusable Visualization and Dashboard Templates
  • Content Authoring
  • Animation and Playback
  • Formatting and Layout

5. Publish, Share, and Collaborate Capabilities

Users need to be able to publish, deploy, and operationalize visualizations. To collaborate more efficiently, users need the ability to share, discuss, and track information, analysis, analytic content, and decisions (embedded dashboard link, PDF printing, chat, and annotations).

6. Scalable Architecture

Modular architecture using microservices should integrate seamlessly with customer or third-party data science or workflow tools.

  • Resilient and fault-tolerant
  • Wherever possible, the system should be constructed from distributed redundant components transparent to end users.
  • Readily supports all global and standalone deployment scenarios (in-country / in-customer / in-cloud deployments)
  • The system should be modular, composed of multiple services with transparent or industry-standard interfaces such as WITSML.
  • Developers should be able to develop their own components that simply plug into the existing viewer infrastructure. The messaging system should be open.
  • There should be an API to subscribe and publish to messages inside the service infrastructure.
  • SDK (client, framework)
    • The SDK should be able to provide developers with the ability to simply develop and build new display widgets or extend existing ones.

7. Platform Capabilities

These capabilities may be offered in a single, seamless product or across multiple products.

  • Playback for all records of the well (variable speed, pause, and rewind)
  • Multiple language and character sets support
  • Data versioning for analysis and retrieval
  • Data storage/history
  • Global mnemonics, unit types, unit set, unit conversion
  • Acquisition real-time status indication
  • Client connection status indication
  • Coordinate reference transformation capability
  • Alerts and alarms (critical/notifications) with rapid (near real-time) delivery via email/SMS or out-of-band support
  • Data latency — data should be delivered in near real time from the source where it is acquired to the end-user display.
  • Programming Interfaces / API Integration (WITSML, PRODML, and WITS Applications)
  • Unit-of-Measure Conversion
  • Data Source Connectivity – Aggregation of structured and unstructured data contained within various types of storage platforms, both on-premises and in the cloud:
    • WITSML, ProdML, ResqML, WITS, PPDM, OPC UA, OSIsoft PI, SQL DB, NoSQL DB…
  • Data Export in CSV, LAS, ASCII
  • Analytics (KPI) reporting via custom dashboards or built-in widgets
  • Math engine / expression-based math solver — input formula and calculation — for instance, calculate pore pressure, MSE (Mechanical Specific Energy)
  • Data write-back to the database
  • Annotations
  • Ability to switch/toggle between screens

8. Infrastructure, Administration, Security

Capabilities that enable platform security, user administration, auditing of platform access and utilization, performance optimization, high availability, and disaster recovery.

  • The service incorporates a comprehensive entitlements system allowing user access to be managed down to the individual curve level
  • User access, usage and operating metrics are monitored and recorded for support, security, performance, and activity auditing and reporting
  • Role-based security based on user groups and roles
  • Single Sign-on (Authentication and Authorization)
  • User administration
  • Full audit functionality, usage monitoring
  • Vulnerability management
  • Encryption
  • High availability and disaster recovery
  • Scalability and performance

While this is not an exhaustive list of features, it should provide a solid foundation for any company that wants to evaluate an enterprise cloud viewer for the E&P industry.

For more information about our enterprise data visualization solutions, visit the IVAAP product page, or contact us.


Filed Under: Uncategorized

Dec 01 2017

How to Improve Performance and Reduce Latency of Your Geoscience Data

Storing and accessing large, sometimes sensitive geoscience data is one challenge many top E&P companies face.

Local access is great, but not every user in the world can have local access to the same data. Replication is an option, but with the size of seismic datasets reaching terabytes, this is not practical. In the real world, users only have access to a limited set of local data.

Common Solution Leads to Performance Issues

Many companies store data all over the world. The common infrastructure to allow ubiquitous access to data is to share these files using NFS, a well-known distributed file system protocol used by Linux-based servers.

The issue with NFS is that it is a “chatty” protocol: Many messages are sent back and forth between the client and the server. This is fine when all machines are physically close to each other, but the further away they get, the more latency you introduce. As a result, performance degrades.

NFS is also essentially transparent to the software using it. Some software, like INTViewer, doesn’t “know” that your data is remote, so it can’t optimize its data fetching strategy to the characteristics of your infrastructure. Actually, for seismic data, it assumes that access to individual traces is fast.

A Better Option

This is where INTGeoServer comes into play. Access to data hosted on INTGeoServer—a server with a modern architecture that uses web services to stream geoscience data—is optimized so that there is a limited number of back-and-forth messages. In other words, by installing INTGeoServer next to your data, you make this data accessible from remote places as if it were local.

(Diagram: data access through INTGeoServer vs. NFS)

To visualize any geoscience file in INTViewer, simply drag and drop that file from the file system to INTViewer’s main window and its content appears automatically. From this experience, it might seem that INTViewer is tied to the file system where it resides, meaning it can only read data from that file system. While this is a common use case, using INTGeoServer removes the requirement to have INTViewer and your data on the same file system.

From an INTViewer user’s point of view, the protocol used to access the data doesn’t change the interaction: the visualizations are the same, and the analysis tools work the same way. From a system administrator’s point of view, however, the burden of maintaining worldwide NFS shares is lifted. And because the data is accessed in larger chunks, the performance profile improves substantially.

For more information about INTViewer and INTGeoServer, visit the INTViewer product page, or contact us for a free trial.


Filed Under: Uncategorized Tagged With: cloud, data storage, INTGeoServer, INTViewer

Nov 01 2017

Bridging the Gap Between Business and IT: Visualization Architecture in the Digital Oilfield

A Closer Look at IVAAP

Thierry Danard, VP of Core Platform Technologies

In our latest Tech Talk, E&P Visualization in the Cloud, we featured IVAAP, our cloud-enabled visualization and analytics development platform. We showed how it can be used to monitor and analyze well data as a critical part of your digital transformation.

Thierry Danard, our VP of Core Platform Technologies, presented some of the technical aspects of IVAAP, so we asked him a few questions after the talk to dig a bit deeper:

> Hi, Thierry! We already know that you are the brains behind INTViewer, so which part of IVAAP are you responsible for?

I mostly work on the “P” part of IVAAP, the “platform.” IVAAP can be customized fully, both on the browser side and on the backend side. I focus on the backend side, meaning the microservices on the data side.

> What makes the IVAAP platform unique?

IVAAP comes with a Software Development Kit (SDK) so geoscience developers can tailor our solution to their needs. Developing solutions for the cloud is hard. We want to facilitate the work of these developers. The SDK is designed to ease the challenges developers face when developing distributed solutions.

But developers are not the only customers of IVAAP within IT. Deploying cloud solutions is also hard, and infrastructure folks want options when it comes to deployment. We made the IVAAP platform container-agnostic so that it can be deployed in a highly distributed environment or on standalone servers without changes: the same microservice code runs in both cases.

IVAAP is unique because it bridges the gap between the business and IT: It provides a common platform that both sides can embrace, not just end-users.

 

SDK Architecture (diagram)

 

> Can you give us examples of containers that IVAAP works on?

The most widely used container for the IVAAP backend is Play. This is a high-velocity web framework designed to run on multiple machines, in a distributed fashion.

Another one is Apache Tomcat, the most widely used standalone Java application server. Other well-known Java EE application servers are Oracle GlassFish and WebLogic.

> Why might a developer choose Tomcat over Play?

Not every customer has a network of machines to dedicate to well monitoring or analysis. Depending on what you use IVAAP for, you might not need distributed processing.

But developers also benefit. Developers can use the Integrated Development Environment (IDE) that they already use; it already works with Tomcat. No need to use a special environment, no need to install special plugins or to configure several servers. Developers can be productive from day one. The promise of IVAAP is to accelerate the delivery of geoscience, drilling and production cloud-enabled solutions. You can’t accelerate these deliveries unless your developers are productive.

The IVAAP Approach

 

> How does the SDK help developers create distributed microservices?

The IVAAP backend API makes extensive use of the Akka library. Akka is a toolkit for building highly concurrent, distributed applications. The core Java programming model makes it very difficult for cloud developers to implement distributed processing. The Akka library addresses this concern with its simple model based on actors and messages.

Akka and Play are designed to work together. When Akka code is deployed in Play, you can sustain heavy loads. For example, the Akka actor system might decide to delegate individual processing units to one or several machines. This is virtually transparent to the developer as this is a behavior that depends on the state of each server.
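To make this concrete, here is a minimal Akka (classic) actor written in Java. It is a generic Akka sketch with illustrative names, not IVAAP SDK code:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class CurveFetchActor extends AbstractActor {

    // Immutable message describing the work to perform
    public static final class Fetch {
        final String wellId;
        Fetch(String wellId) { this.wellId = wellId; }
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Fetch.class, msg -> {
                // Fetch the data (omitted here) and reply to whoever asked
                getSender().tell("curves for " + msg.wellId, getSelf());
            })
            .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef fetcher = system.actorOf(Props.create(CurveFetchActor.class), "fetcher");
        // Fire-and-forget message; the actor system decides where and when the work runs
        fetcher.tell(new Fetch("well-42"), ActorRef.noSender());
    }
}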

> How does the SDK help developers create efficient microservices?

The API of the SDK is designed from the ground up to favor asynchronous execution over synchronous execution.

Synchronous code tends to reserve lots of resources just to wait for an answer. Asynchronous code doesn’t reserve these resources while a long processing task is being performed. Lower CPU and memory usage means more processing power for each deployed server, allowing your solution to perform under heavy loads.
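A generic Java illustration of the difference, using plain CompletableFuture code with made-up method names rather than the IVAAP SDK API:

import java.util.concurrent.CompletableFuture;

public class AsyncVsSync {
    // Hypothetical long-running fetch, simulated here with supplyAsync
    static CompletableFuture<String> fetchCurveAsync(String mnemonic) {
        return CompletableFuture.supplyAsync(() -> "samples for " + mnemonic);
    }

    public static void main(String[] args) {
        // Synchronous style: the calling thread is blocked until the data arrives
        String blocking = fetchCurveAsync("GR").join();
        System.out.println(blocking);

        // Asynchronous style: callbacks run when the data arrives, no thread is parked waiting
        fetchCurveAsync("RHOB")
            .thenApply(String::toUpperCase)
            .thenAccept(System.out::println)
            .join(); // join here only so this demo does not exit early
    }
}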

> What’s coming next for the IVAAP backend?

Now that we have made it easy to add new data sources and new microservices, we are adding connectivity to even more data repositories, such as OSIsoft PI, Procount, or Peloton. This is a typical use case of the backend API. We have cleanly separated the microservices and data access parts. Now it’s just a matter of plugging in additional data sources.


Stay tuned for more interviews with our developers! In the meantime, click here to learn more about IVAAP.


Filed Under: Uncategorized Tagged With: Azure, ivaap, Microsoft
