Mar 24 2021

INT Supports The Open Group OSDU™ Forum Mercury Release with Advanced Domain Data Visualization in the Cloud

As a long-standing OSDU Forum Member, INT has worked closely with the OSDU development teams to ensure seamless integration of IVAAP visualization of OSDU data on all major cloud providers.

Houston, TX – March 24, 2021 – INT is pleased to announce our partnership with The Open Group OSDU™ Forum as part of the new Mercury Release. INT’s flagship data visualization platform, IVAAP, offers a unique way for operators to search, explore, interact with, and automate their data on OSDU in a single platform in the cloud. 

Developed by The Open Group OSDU™ Forum, the OSDU Data Platform is an Open Source, standards-based and technology-agnostic data platform for the energy industry that stimulates innovation, industrializes data management, and reduces time to market for new solutions.

For companies adopting OSDU, IVAAP is a powerful, fast, and cost-effective alternative to custom building an application or assembling multiple components to visualize domain data. INT partners with all major cloud providers that support OSDU — AWS, Microsoft Azure, Google Cloud Platform, and Red Hat OpenShift by IBM. IVAAP also includes multiple customization options, including an SDK, providing a complete, end-to-end visualization solution.


Olivier Lhemann, founder and president of INT, explains: “As more energy companies transition their data and workflows to the cloud, it’s more important than ever to have a common data standard. Our work with OSDU is critical to helping companies solve the challenge of interoperability, of viewing their data from a single application, eliminating silos and liberating workflows. IVAAP is a universal cloud viewer that significantly reduces time to market and accelerates the adoption of innovative technologies.”


To learn more about IVAAP and how it works with OSDU, visit INT.com/IVAAP.

Read the full press release on PRWeb.

About INT:

INT software empowers energy companies to visualize their complex data (seismic, well log, reservoir, and schematics) in 2D/3D. INT offers a visualization platform (IVAAP) and libraries (GeoToolkit) that developers can use with their data ecosystem to deliver subsurface solutions (exploration, drilling, production). INT’s powerful HTML5/JavaScript technology can be used for data aggregation, API services, and high-performance visualization of G&G and petrophysical data in a browser. INT simplifies complex subsurface data visualization.

About The Open Group

The Open Group is a global consortium that enables the achievement of business objectives through technology standards. Our diverse membership of more than 800 organizations includes customers, systems and solutions suppliers, tool vendors, integrators, academics, and consultants across multiple industries. For more information, visit www.opengroup.org.

INT, the INT logo, and IVAAP are trademarks of Interactive Network Technologies, Inc., in the United States and/or other countries.

 

Open Subsurface Data Universe™ and OSDU™ are trademarks of The Open Group.

Filed Under: IVAAP, Press Release Tagged With: AWS, cloud, IBM, ivaap, Microsoft, open group, OSDU

Jan 12 2021

Comparing Storage APIs from Amazon, Microsoft and Google Clouds

One of the unique capabilities of IVAAP is that it works with the cloud infrastructure of multiple vendors. Whether your SEG-Y file is stored on Microsoft Azure Blob Storage, Amazon S3, or Google Cloud Storage, IVAAP can visualize it.

It’s only when administrators register new connectors that vendor-specific details need to be entered. For all other users, the user interface is identical regardless of the data source. The REST API consumed by IVAAP’s HTML5 client is common to all connectors as well. The key component that does the hard work of speaking the language of each cloud vendor and hiding its details from the other components is the IVAAP Data Backend.

While the concept of “storage in the cloud” is similar across all three vendors, each provides a different API to achieve similar goals. In this article, we will compare how to implement four basic operations. Because the IVAAP Data Backend is written in Java, we’ll only compare Java APIs.

 

Checking that an Object or Blob Exists

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
try {
    // headObject throws NoSuchKeyException when the key doesn't exist
    HeadObjectRequest request = HeadObjectRequest.builder().bucket(bucketName).key(keyName).build();
    s3Client.headObject(request);
    return true;
} catch (NoSuchKeyException e) {
    return false;
}

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = ...
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder().endpoint(endpoint).credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
BlobClient blobClient = containerClient.getBlobClient(blobName);
return blobClient.exists();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = ...
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.ID));
return blob != null && blob.exists();

 

Getting the Last Modification Date of an Object or Blob

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
        .bucket(bucketName)
        .key(keyName)
        .build();
HeadObjectResponse headObjectResponse = s3Client.headObject(headObjectRequest);
return headObjectResponse.lastModified();

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = …
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobClient blob = client.getBlobContainerClient(containerName).getBlobClient(blobName);
BlobProperties properties = blob.getProperties();
return properties.getLastModified();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = …
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.UPDATED));
return blob.getUpdateTime();

 

Getting an Input Stream out of an Object or Blob

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
GetObjectRequest getObjectRequest = GetObjectRequest.builder()
        .bucket(bucketName)
        .key(keyName)
        .build();
// getObject returns a ResponseInputStream over the object's content
return s3Client.getObject(getObjectRequest);

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = …
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobClient blob = client.getBlobContainerClient(containerName).getBlobClient(blobName);
return blob.openInputStream();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = …
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.values()));
return Channels.newInputStream(blob.reader());

 

Listing the Objects in a Bucket or Container While Taking into Account Folder Hierarchies

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
String region = …
String bucketName = …
String parentFolderPath = ...
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(Region.of(region))
        .build();
ListObjectsV2Request.Builder builder = ListObjectsV2Request.builder().bucket(bucketName).delimiter("/").prefix(parentFolderPath + "/");
ListObjectsV2Request request = builder.build();
ListObjectsV2Iterable paginator = s3Client.listObjectsV2Paginator(request);
Iterator<CommonPrefix> foldersIterator = paginator.commonPrefixes().iterator();
while (foldersIterator.hasNext()) {
…
}

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String parentFolderPath = ...
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
Iterable<BlobItem> iterable = containerClient.listBlobsByHierarchy(parentFolderPath + "/");
for (BlobItem currentItem : iterable) {
   …
}

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String parentFolderPath = ...
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Page<Blob> blobs = storage.list(bucketName, BlobListOption.prefix(parentFolderPath + "/"), BlobListOption.currentDirectory());
for (Blob currentBlob : blobs.iterateAll()) {
 ...
}

 

Most developers will discover these APIs by leveraging their favorite search engine. Driven by innovation and performance, cloud APIs become obsolete quickly. Amazon was the pioneer, and much of the documentation still indexed by Google covers the v1 SDK, even though v2 has been available for more than two years (it wasn’t a complete replacement at first). This sometimes makes research challenging for the simplest needs. Microsoft migrated from v8 to v12 more recently and has a similar challenge to overcome. Being the most recent major player, the Google SDK is not dragged down much by obsolete articles.

The second way that developers will discover an API is by using the official documentation. I found that the Microsoft documentation is the most accessible. There is a definite feel that the Microsoft Azure documentation is treated as an important part of the product, with lots of high-quality sample code targeted at beginners.

The third way that developers discover an API is through their IDE’s code completion. All cloud vendors make heavy use of the builder pattern. The builder pattern is a powerful way to provide options without breaking backward compatibility, but it slows down the self-discovery of the API. The Amazon S3 API also stays quite close to the HTTP protocol, using terminology such as “GetRequest” and “HeadRequest”. Microsoft had a higher-level API in v8 where you manipulated blobs directly; the v12 iteration moved away from that apparent simplicity by introducing the concept of blob clients instead. Microsoft offers a refreshing explanation of this transition. Overall, I found that the Google SDK tends to offer simpler APIs for performing simple tasks.

There are more criteria than simplicity and discoverability when comparing APIs. Versatility and performance are two of them. The Amazon S3 Java SDK is probably the most versatile because of the larger number of applications that have used its technology. It even works with S3 clones such as MinIO Object Storage (and so does IVAAP). The space where there are still a lot of changes is asynchronous APIs. Asynchronous APIs tend to offer higher scalability and faster execution, but they can only be compared in the specific use cases where they are actually needed. IVAAP makes heavy use of asynchronous APIs, especially to visualize seismic data. That would be the subject of another article; it is an area that evolves rapidly and deserves a more in-depth comparison.
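To give a flavor of what that comparison would involve, here is a minimal sketch of an asynchronous read using the v2 Amazon SDK. The variables mirror the snippets above, and process(...) stands in for a hypothetical consumer of the downloaded bytes.

import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

S3AsyncClient s3AsyncClient = S3AsyncClient.builder()
        .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(awsAccessKey, awsSecretKey)))
        .region(Region.of(region))
        .build();
GetObjectRequest request = GetObjectRequest.builder().bucket(bucketName).key(keyName).build();
// The call returns immediately; the future completes once the bytes have arrived,
// leaving the calling thread free to issue other requests in the meantime.
CompletableFuture<ResponseBytes<GetObjectResponse>> future =
        s3AsyncClient.getObject(request, AsyncResponseTransformer.toBytes());
future.thenAccept(bytes -> process(bytes.asByteArray())); // process(...) is hypothetical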

For more information on IVAAP, please visit www.int.com/products/ivaap/

 


Filed Under: IVAAP Tagged With: API, cloud, Google, ivaap, java, Microsoft

Nov 20 2020

A New Era in O&G: Critical Components of Bringing Subsurface Data to the Cloud

The oil and gas industry is historically one of the first industries generating actionable data in the modern sense. For example, the first seismic imaging was done in 1932 by John Karcher.

 

Seismic dataset in 1932.

 

Since that first primitive image, seismic data has been digitized and has grown exponentially in size. It is usually represented in monolithic data sets that can span from a couple of gigabytes to petabytes for pre-stack data.

Seismic datasets today.

 

The long history, large amount of data, and the nature of the data pose unique challenges that often make it difficult to take advantage of advancing cloud technology. Here is a high-level overview of the challenges of working with oil and gas data and some possible solutions to help companies take advantage of the latest cloud technologies. 

Problems with Current Data Management Systems

Oil and Gas companies are truly global companies, and the data is often distributed among multiple disconnected systems in multiple locations. This not only makes it difficult to find and retrieve data when necessary but also makes it difficult to know what data is available and how useful it is. This often requires person-to-person communication, and some data may even be in offline systems or on someone’s desk.

The glue between those systems is data managers, who are amazing at what they do but still introduce a human factor to the process. They have to understand which dataset is being requested, then search for it on various systems, and finally deliver it to the original requester. How much time does this process take? You guessed it: way too much. And in the end, the requester may realize that it’s not the data they were hoping to get, and the whole process is back to square one.

After the interpretation and exploration process, decisions are usually made on the basis of data screenshots and cherry-picked views, which limit the ability of specialists to make informed decisions. Making bad decisions based on incomplete or limited data can be very expensive. This problem would not exist if the data were easily accessible in real time.

And that doesn’t even factor in collaboration between teams and countries. 

How can O&G and service companies manage
their massive subsurface datasets better
by leveraging modern cloud technologies?

3 Key Components of Subsurface Data Lake Implementation

There are three critical components of a successful subsurface data lake implementation: a strong cloud infrastructure, a common data standard, and robust analysis and visualization capabilities. 

 


 

AWS: Massive Cloud Architecture

While IVAAP is compatible with any cloud provider—along with on-premise and hybrid installations—AWS offers a strong distributed cloud infrastructure, reliable storage, compute, and more than 150 other services to empower cloud workflows. 

OSDU: Standardizing Data for the Cloud

The OSDU Forum is an energy industry forum formed to establish an open subsurface reference architecture, including a cloud-native subsurface data platform with usable implementations for major cloud providers. It includes application standards (APIs) to ensure that all applications (microservices) developed by various parties can run on any OSDU data platform, and it leverages industry data standards for frictionless integration and data access. The goal of OSDU is to bring all existing formats and standards under one umbrella that can be used by everyone, while still supporting legacy applications and workflows.

IVAAP: Empowering Data Visualization

A data visualization and analysis platform such as IVAAP, which is the third key component to a successful data lake implementation, provides industry-leading tools for data discovery, visualization, and collaboration. IVAAP also offers integrations with various Machine Learning and artificial intelligence workflows, enabling novel ways of working with data in the cloud.


 

Modern Visualization — The Front End to Your Data

To visualize seismic data, as well as other types of data, in the cloud, INT has developed a native web visualization platform called IVAAP. IVAAP consists of a front-end client application and a backend. The backend takes care of accessing, reading, and preparing data for visualization. The client application provides a set of widgets and UI components empowering search, visualization, and collaboration for its users. Data reading and other low-level functions are abstracted from the client by a Domain API and work through connector microservices on the backend. To provide support for a new data type, you only need to create a new connector, as sketched below. Both parts provide an SDK for developers, along with some other perks.
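To make the connector idea concrete, here is a minimal sketch of what such a contract could look like. The interface and method names are hypothetical illustrations, not the actual IVAAP SDK API.

import java.io.IOException;
import java.io.InputStream;

// Hypothetical connector contract: one implementation per data source or format.
public interface DataConnector {

    /** Returns true if this connector can serve the requested data type (e.g. "seismic"). */
    boolean accepts(String dataType);

    /** Streams the requested dataset, hiding where and how it is stored. */
    InputStream open(String datasetId) throws IOException;
}

Supporting a new data type then amounts to dropping in a new implementation; the client application and the Domain API remain untouched.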

Compute Close to Your Data

Once the data is in the cloud, a variety of services become available. One of them is AWS Elasticsearch, which helps index the data and provides a search interface. Another is AWS EC2, which provides compute resources that are as distributed as the data is. That’s where IVAAP gets installed.

One of the cloud computing principles is that data has a lot of gravity and all the computing parts tend to get closer to it. This means that it is better to place the processing computer as close to the data as possible. With AWS EC2, we at INT can place our back end very close to the data, regardless of where it is in the world, minimizing latency for the user and enabling on-demand access. Elastic compute resources also enable us to scale up when the usage increases and down when fewer users are active.

 


All of this works together to make your data on-demand—when the data needs to be presented, all the tools and technologies mentioned above come into play, visualizing the necessary data in minutes, or even seconds, with IVAAP dashboards and templates. And of course, the entire setup is secure on every level. 

Empower Search and Discovery

The next step is to make use of this data. And to do so, we need to provide users a way to discover it. What should be made searchable, how to set up a search, and how to expose the search to the users? 

Since searching through the numerical values of the data won’t provide a lot of discovery potential, we need additional metadata. This metadata is extracted along with the data and also uploaded to the cloud. All of the metadata, or a subset of it, is then indexed using AWS Elasticsearch. IVAAP uses an Elasticsearch connector to run the search, as well as tools to invoke it through an interactive map interface or filter forms presented to the user.
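As a rough illustration of how a map-driven search could translate into a query, here is a sketch using the Elasticsearch high-level REST client. The endpoint, index name, and field names are all hypothetical.

import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("search-mydomain.us-east-1.es.amazonaws.com", 443, "https")));
// Find datasets whose surface location falls inside the rectangle drawn on the map.
SearchSourceBuilder source = new SearchSourceBuilder()
        .query(QueryBuilders.boolQuery()
                .must(QueryBuilders.matchQuery("name", "survey"))
                .filter(QueryBuilders.geoBoundingBoxQuery("location")
                        .setCorners(30.0, -98.0, 28.0, -94.0))); // top, left, bottom, right
SearchResponse response = client.search(new SearchRequest("datasets").source(source), RequestOptions.DEFAULT);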

How can you optimize web performance of massive domain datasets?

Visualizing Seismic Datasets on the Web

There are two very different approaches to visualizing data. One is to render it on the server and send the images to the client. This process lacks interactivity, which limits the decisions that can be made from those views. The other option is to send data to the client and visualize it on the user’s machine. IVAAP implements both approaches.

While the preferred method—sending data to the client’s machine—provides limitless interactivity and responsiveness of the visuals, it also poses a special challenge: the data is just too big. Transferring terabytes of data from the server to the user would mean serious problems. So how do we solve this challenge? 

First, it is important to understand that not all the data is always visible. We can calculate which part of the data is visible on the user’s screen at any given moment and only request that part. Some of the newer data formats are designed to operate with such reads and provide ways to do chunk reads out of the box. A lot of legacy data formats—for example, SEG-Y—are often unstructured. To properly calculate and read the location of the desired chunk, we need to first have a map—called an Index—that is used to calculate the offset and the size of chunks to be read. Even then, the data might still be too large. 
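As a rough sketch of the arithmetic such an index relies on, assuming fixed-length traces and the standard SEG-Y layout (a 3,200-byte textual header, a 400-byte binary header, and a 240-byte header before each trace):

final class SegyIndex {
    private static final long FILE_HEADER = 3200 + 400; // textual + binary file headers
    private static final long TRACE_HEADER = 240;       // header preceding each trace
    private final long traceDataBytes;                  // samples per trace * bytes per sample

    SegyIndex(long samplesPerTrace, long bytesPerSample) {
        this.traceDataBytes = samplesPerTrace * bytesPerSample;
    }

    /** Byte offset of the first byte of a given trace. */
    long offsetOf(long traceIndex) {
        return FILE_HEADER + traceIndex * (TRACE_HEADER + traceDataBytes);
    }

    /** Size in bytes of a chunk of traceCount consecutive traces. */
    long sizeOf(long traceCount) {
        return traceCount * (TRACE_HEADER + traceDataBytes);
    }
}

A real index also has to cope with variable trace lengths, which is precisely why the file is scanned once up front to build the map.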

Luckily, we don’t always need the whole resolution. If a user’s screen is 3,000 pixels wide, they won’t be able to display all 6,000 traces, so we can then adaptively decrease the number of traces to provide for optimal performance.
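A minimal sketch of that decimation, with firstVisibleTrace, lastVisibleTrace, and screenWidthPixels standing in for values computed from the viewport:

long visibleTraces = lastVisibleTrace - firstVisibleTrace + 1;
long stride = Math.max(1, visibleTraces / screenWidthPixels);
for (long trace = firstVisibleTrace; trace <= lastVisibleTrace; trace += stride) {
    // Request only every stride-th trace: 6,000 visible traces on a
    // 3,000-pixel-wide screen gives a stride of 2.
}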


The chunks we read are often in different places in the file, making it necessary to do multiple reads at the same time. Luckily, both S3 storage and IVAAP support such behavior: we can fire off thousands of requests in parallel, maximizing the efficiency of the network. And even then, once the traces are picked and ready to ship, we apply some vectorized compression before sending the data to the client.
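Here is a minimal sketch of such parallel ranged reads with the v2 Amazon SDK, reusing the asynchronous client from the earlier sketch; the chunks list, with its offset() and length() accessors, stands in for the hypothetical output of the index lookup.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// One ranged GET per chunk, all in flight at the same time.
List<CompletableFuture<ResponseBytes<GetObjectResponse>>> reads = chunks.stream()
        .map(chunk -> GetObjectRequest.builder()
                .bucket(bucketName)
                .key(keyName)
                .range("bytes=" + chunk.offset() + "-" + (chunk.offset() + chunk.length() - 1))
                .build())
        .map(req -> s3AsyncClient.getObject(req, AsyncResponseTransformer.toBytes()))
        .collect(Collectors.toList());
// Wait for every chunk to arrive before assembling the traces.
CompletableFuture.allOf(reads.toArray(new CompletableFuture[0])).join();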

We have been talking about legacy file formats here, but GPU compression is also available for newer file formats like VDS/OpenVDS and ZGY/OpenZGY. These newer formats also provide perks like brick storage, random access patterns, adaptive levels of detail, and more.

Once the data reaches the client, JavaScript and Web Assembly technologies come together to decompress the data. The data is then presented to the user using the same technologies through some beautiful widgets, providing interactivity and a lot of control. From there, building a dashboard—drilling, production monitoring, exploration, etc.—with live data takes minutes.

All the processes mentioned are automated and require minimal human management. With all the work described above, we enable a user to search for the data of interest, add it to the desired visualization widgets (multiple are available for each type of data), and display it on their screen with a set of interactive tools to manipulate the visuals. All within minutes, and from their home office.

That’s not all: a user can save the visualizations and data states into a dashboard and share it with colleagues sitting on a different continent, who can then open the exact same view in a matter of minutes. With more teams working remotely, this seamless sharing helps facilitate collaboration and reduce data redundancy and errors.


Data Security

How do we keep this data secure? There are two layers of authentication and authorization in such a system. First, AWS S3 uses identity-based access policies to guarantee that data is visible only to authorized requests. IVAAP uses OAuth2, integrated with AWS Cognito, to authenticate the user and authorize the requests. The user logs into the application and receives tokens that allow them to communicate with IVAAP services; the client passes these tokens back to the IVAAP server. In the backend, IVAAP validates the same tokens with AWS Cognito whenever data reads need to happen. Once validated, a new temporary signed access token is issued for S3, which IVAAP uses to read from the file in a bucket.
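One common way to implement that last step with the v2 Amazon SDK is a short-lived presigned GET request, sketched below; whether IVAAP uses this exact mechanism is an assumption, but it illustrates the temporary-signature idea.

import java.time.Duration;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;

S3Presigner presigner = S3Presigner.create();
GetObjectRequest getRequest = GetObjectRequest.builder().bucket(bucketName).key(keyName).build();
PresignedGetObjectRequest presigned = presigner.presignGetObject(
        GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10)) // short-lived, limiting exposure
                .getObjectRequest(getRequest)
                .build());
// The resulting URL embeds a temporary signature and can be fetched directly.
System.out.println(presigned.url());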

Takeaways

Moving to the cloud is not a simple task and poses many challenges. By combining AWS technology with INT’s IVAAP, underpinned by OSDU data standardization, we can create a low-latency data QC and visualization system that puts all the data in one place, provides tools to search for data of interest, enables real-time, on-demand access to the data from any location with internet access, and does all of this securely.

For more information on IVAAP, please visit int.com/ivaap/ or to learn more about how INT works with AWS to facilitate subsurface data visualization, check out our webinar, “A New Era in O&G: Critical Components of Bringing Subsurface Data to the Cloud.”


Filed Under: IVAAP Tagged With: AWS, cloud, data visualization, digital transformation, ivaap, subsurface data visualization

Jul 16 2020

Jumpstart the Development of Your Next Cloud Application with GeoToolkit.JS and INTGeoServer

The Oil and Gas industry is turning to the cloud for its digital transformation. In the race to revolutionize E&P, companies are faced with a chicken-and-egg problem:

  • How to build cloud-based applications when the data is still within the confines of the company network?
  • Why move to the cloud when there are no applications that are able to use this data?

INT has long been a pioneer in providing JavaScript components that empower developers to build geoscience applications that run in a browser. The GeoToolkit.JS libraries cut years of development time for any company creating a new application or replacing a legacy system. However, the added value of this kind of application is not just in accessing and visualizing geoscience data; it’s also in integrating the company’s knowledge within the application.

While GeoToolkit.JS provides the tools to visualize geoscience data, INTGeoServer provides the tools to access remote data. This server has been designed to serve seismic and well data efficiently to web clients. It uses the HTTP protocol and works natively with your existing files (such as SEG-Y, SEP, and LAS). In just a few clicks, you can have a running instance of INTGeoServer, upload files to the cloud, and visualize them immediately with GeoToolkit.JS.

Most customers using INTGeoServer elect to install several instances. To work efficiently with seismic data, INTGeoServer needs to be close to that data. Because E&P companies have data scattered all over the globe, installations of INTGeoServer are similarly distributed, allowing one application to access datasets from multiple sources. In a classic configuration, data ubiquity is typically achieved by deploying worldwide file systems. INTGeoServer optimizes remote data access by applying several techniques that networks cannot use: sending only the data that the GeoToolkit.JS client needs, limiting round trips, and compressing the data by leveraging similarities between adjacent traces.
 
 
GeoToolkit.JS has a built-in API to access INTGeoServer instances. It only takes a few lines of code to program a JavaScript application that will read remote data and visualize it. As a result, programmers are free to focus on the added value of their application.

INTGeoServer also offers a nice transition from classic file systems to cloud-based storage. From the perspective of the web client, the code is storage-agnostic. While a company works on migrating its data to the cloud, its developers can use instances of INTGeoServer that are bound to the company network. Once the cloud is ready, no changes to the application are required. You do not need to decide in advance which cloud provider will host your data. INTGeoServer works with Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage. If your application serves data from multiple vendors, you can let each vendor choose their own cloud.

GeoToolkit.JS is meant to empower developers. It provides ready-to-use components that can be customized by developers outside of INT. Similarly, INTGeoServer is a platform. It has an API allowing you to add your own data formats, your own security, and your own science. As the audience of your application grows, you might elect to implement your own data server. INTGeoServer facilitates this future transition by documenting the HTTP protocol it implements. You are free to implement your own version of this protocol, keeping your JavaScript web application running without requiring any changes. In this particular scenario, INTGeoServer gives you a definitive time-to-market advantage.

GeoToolkit.JS allows requesting seismic data, log curves, trajectories, and horizons from INTGeoServer. The following screenshot shows a cross-section display built with data hosted on INTGeoServer.

[Screenshot: cross-section display]

Log curves can be requested from INTGeoServer using a simple REST API and visualized inside WellLogWidget or MultiWellWidget.

Seismic data in different formats like SEG-Y, SEG-D, SU, and others can be indexed by a utility provided with INTGeoServer, and GeoToolkit.JS can leverage that index using sophisticated queries. It is easy to request seismic sections using RemoteSeismicDataSource, specifying an arbitrary path or INLINE and XLINE coordinates to get data located in cloud or private storage. Moreover, the seismic volume can be visualized in 3D with the Carnac3D module of GeoToolkit.JS.

[Screenshot: crossline seismic section]

As the industry continues to shift towards a digital transformation, more and more E&P companies will migrate their data to the cloud. And with the support of GeoToolkit.JS and INTGeoServer, it becomes simple and efficient to integrate, access, and visualize a company’s data within an application in the cloud.

For more information about GeoToolkit and INTGeoServer, visit the GeoToolkit product page or contact us for a free trial.


Filed Under: GeoToolkit, INTGeoServer Tagged With: cloud, data storage, geotoolkit, INTGeoServer

Jun 02 2020

INT Brings OpenVDS Java Binding to the OSDU Community

Recently, INT announced our partnership with Bluware and our integration of Bluware’s OpenVDS format into IVAAP, our enterprise data visualization platform. We are very excited about this partnership, as well as our collaboration with OSDU. This new format was designed to empower users to browse seismic data in the cloud with high performance and lower cost.

If you are not familiar with its capabilities: OpenVDS is a cloud-native format for storing seismic data. Unlike SEG-Y, which is linear, OpenVDS data is broken into small objects and stored in a cloud object store, providing very fast access to any part of the data. OpenVDS is serverless and supports any type of seismic data, including pre-stack.

Here’s an example of how seismic data can be stored in the cloud:

Graphic courtesy of Bluware Corp.

 

But with OpenVDS, you have the option to store headers in the hot tier and trace data in the cold or cool tier (to restore as needed).

Graphic courtesy of Bluware Corp.
 

Through our process of integrating this format, we realized that we could help more users adopt OpenVDS by also offering a Java binding. Here’s a timeline of our process:

Late March 2020

After completing the VDS integration into IVAAP, we started work on OpenVDS compatibility.

Unfortunately, there was no Java binding for OpenVDS at the time. With Java being among the most popular platforms for complex web application backends, it seemed it would benefit everyone if the OpenVDS technology were easily usable in these environments.

Thus, we decided to bring our expertise to the community and started working on an open-source Java binding.

April 2020

Our expert team worked on the binding. After testing different approaches, we decided to avoid automatic binding technologies (such as SWIG) and to write the JNI code manually.

This decision gave us finer control over memory management, allowing us to reduce the cost of memory transfers between the Java and native C++ worlds.
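To make this concrete, here is a minimal sketch of the shape a hand-written JNI binding takes; the class, library, and method names are illustrative, not the actual OpenVDS API.

// Hypothetical wrapper around a native C++ object, with explicit lifetime control.
public final class VolumeDataAccessor implements AutoCloseable {

    static {
        System.loadLibrary("openvds-java"); // loads the hand-written C++ bridge
    }

    /** Opaque pointer to the native object, owned by this wrapper. */
    private long nativeHandle;

    // Implemented in C++ as hand-written JNIEXPORT functions rather than generated code.
    private native long nativeOpen(String url);
    private native void nativeClose(long handle);

    public void open(String url) {
        nativeHandle = nativeOpen(url);
    }

    /** Explicit release gives finer control over native memory than relying on the GC. */
    @Override
    public void close() {
        if (nativeHandle != 0) {
            nativeClose(nativeHandle);
            nativeHandle = 0;
        }
    }
}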

We also paid particular attention to stability and error management, since this library is meant to be used in server backends with long uptimes.

May 18, 2020

We’re done! We are proud to announce that the work of our experts has been accepted and merged into the OpenVDS repository.

A special thank you to Bluware for their support and to Roman Matyaschuk, Ilia Mikhailichenko, and Camille Perin with INT for making this a success story.

For more information on IVAAP, please visit www.int.com/products/ivaap/


Filed Under: IVAAP Tagged With: Bluware, cloud, ivaap, openVDS, OSDU, seismic data

