May 20 2021

Deploying IVAAP Services to Google App Engine

One of the productivity features of the IVAAP Data Backend SDK is that services developed with it are container-agnostic. In practice, this means that a REST service developed on your PC with your favorite IDE and deployed locally to Apache Tomcat will run without changes on IVAAP’s Play cluster.

While the Data Backend SDK is traditionally used to serve data, it is also a good candidate for developing non-data-related services. For example, as part of IVAAP 2.8, we worked on a gridding service. In a nutshell, this service computes a grid surface based upon the positions of a top across the wells of a project. When we tested this service, we didn’t deploy it to IVAAP’s cluster; we deployed it as a standalone servlet application on a virtual machine (VM).

Deploying Apache Tomcat on a virtual machine is “old school”. Our customers are rapidly moving to the cloud, and while VMs are often a practical choice, other options are sometimes available. One of these options is Google App Engine. Google App Engine was a pioneer of cloud-based deployments: it was the first product that allowed servlet deployments to scale automatically, without having to worry about the underlying infrastructure of virtual machines. This “infinite” scalability comes with quite a few constraints, and I was curious to find out whether services developed with the IVAAP Data Backend SDK could live within these constraints (spoiler alert: they can).

Synchronous Servlet Support

The first constraint was the lack of support for asynchronous servlets: Google App Engine doesn’t support them, and the IVAAP servlet shipped with the SDK is strictly asynchronous. Supporting the synchronous requirements of Google App Engine didn’t take much time. The main change was to modify the concrete implementation of com.interactive.ivaap.server.servlets.async.AbstractServiceRequest.waitForResponse to wait on a java.util.concurrent.CountDownLatch instead of calling javax.servlet.ServletRequest.startAsync().
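
The pattern itself is simple. Here is a minimal sketch of a synchronous wait, with hypothetical class and member names (the actual implementation lives in the SDK’s AbstractServiceRequest):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal sketch, with hypothetical names: instead of calling
// javax.servlet.ServletRequest.startAsync(), the servlet thread
// blocks on a latch until a worker produces the response.
public class SynchronousServiceRequest {

    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile byte[] response;

    // Called by the worker thread that produces the response.
    public void complete(byte[] payload) {
        this.response = payload;
        latch.countDown();
    }

    // Called by the servlet thread; blocks until complete() is invoked.
    public byte[] waitForResponse(long timeout, TimeUnit unit)
            throws InterruptedException {
        if (!latch.await(timeout, unit)) {
            throw new IllegalStateException("Service did not respond in time");
        }
        return response;
    }
}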

Local File Access

The second constraint was the lack of a local file system: Google App Engine doesn’t let developers access the local files of the virtual machine where an application is deployed. The IVAAP Data Backend SDK typically makes little use of the local file system, except at startup when it reads its service configuration. To authorize users, services developed with the IVAAP Data Backend SDK need to know how to validate Bearer tokens, and this validation requires knowing the host name of the IVAAP Admin Backend, which exposes REST services for that validation. To support Google App Engine, I had to make the discovery of these configuration files pluggable so that they can be read from the WEB-INF directory of the servlet instead of a directory external to it.
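
For illustration, here is a minimal sketch of reading a configuration file packaged inside the WAR, through the ServletContext rather than the file system; the file name is hypothetical:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import javax.servlet.ServletContext;

// Minimal sketch: loads a service configuration bundled under WEB-INF
// instead of reading a directory external to the servlet.
// The file name is hypothetical.
public final class WebInfConfigLoader {

    public static Properties load(ServletContext context) throws IOException {
        Properties config = new Properties();
        try (InputStream in =
                context.getResourceAsStream("/WEB-INF/ivaap-services.properties")) {
            if (in == null) {
                throw new IOException("Configuration not found in WEB-INF");
            }
            config.load(in);
        }
        return config;
    }
}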

Persistence Mechanism

The third constraint was the lack of persistence: Google App Engine doesn’t provide a way to “remember” information between two HTTP calls. To support computing services effectively, a REST API cannot make an HTTP client “wait” for the completion of a computation that might take minutes, even hours. Instead, the REST API of a computing service has to give a “ticket” number back to the client when a process starts, and provide a way for the client to observe the progress of that ticket through to completion. In a typical servlet deployment, there are many options to achieve this: the service can store ticket information on the Java heap or in a database. To achieve the same result with Google App Engine, I needed to pick a persistence mechanism. For simplicity’s sake, I picked Google Cloud Storage: the state of each ticket is stored as a file in that storage.
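
Here is a minimal sketch of what persisting ticket state to Google Cloud Storage can look like with the google-cloud-storage client; the bucket name and object layout are hypothetical:

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.charset.StandardCharsets;

// Minimal sketch: each ticket's state is a small JSON document stored
// as one object in a bucket. Bucket name and object layout are hypothetical.
public final class TicketStore {

    private final Storage storage = StorageOptions.getDefaultInstance().getService();
    private static final String BUCKET = "my-tickets-bucket"; // hypothetical

    public void saveState(String ticketId, String stateJson) {
        BlobId blobId = BlobId.of(BUCKET, "tickets/" + ticketId + ".json");
        BlobInfo blobInfo = BlobInfo.newBuilder(blobId)
                .setContentType("application/json")
                .build();
        storage.create(blobInfo, stateJson.getBytes(StandardCharsets.UTF_8));
    }

    public String readState(String ticketId) {
        byte[] content = storage.readAllBytes(
                BlobId.of(BUCKET, "tickets/" + ticketId + ".json"));
        return new String(content, StandardCharsets.UTF_8);
    }
}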

Background Task Executions

The fourth constraint was the lack of support for background executions: Google App Engine by itself doesn’t allow processes to execute in the background. Google, however, provides integration with another product called Google Cloud Tasks. Using the Google Cloud Tasks API, you can submit HTTP requests to a queue, and Google Cloud Tasks will make sure these requests eventually get executed. Essentially, when the gridding service receives an HTTP request, it creates a ticket number and immediately submits this HTTP request to Google Cloud Tasks, which in turn calls back Google App Engine. The IVAAP service recognizes that the call comes from Google Cloud Tasks and stores the result in a file in Google Cloud Storage instead of the servlet output stream. It then notifies the client that the process has completed.
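
Here is a minimal sketch of enqueuing such a callback with the Google Cloud Tasks Java client; the project, location, queue, and relative URI are hypothetical, not the actual names used by the gridding service:

import com.google.cloud.tasks.v2.AppEngineHttpRequest;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.HttpMethod;
import com.google.cloud.tasks.v2.QueueName;
import com.google.cloud.tasks.v2.Task;

// Minimal sketch: asks Google Cloud Tasks to call the App Engine
// service back. Project, location, queue, and URI are hypothetical.
public final class GriddingTaskSubmitter {

    public void submit(String ticketId) throws Exception {
        try (CloudTasksClient client = CloudTasksClient.create()) {
            String queuePath =
                    QueueName.of("my-project", "us-central1", "gridding-queue").toString();
            AppEngineHttpRequest callback = AppEngineHttpRequest.newBuilder()
                    .setRelativeUri("/services/gridding?ticket=" + ticketId)
                    .setHttpMethod(HttpMethod.POST)
                    .build();
            Task task = Task.newBuilder().setAppEngineHttpRequest(callback).build();
            client.createTask(queuePath, task);
        }
    }
}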

Here’s a diagram that describes the complete workflow: 

[Diagram: INT_GCP_Workflow — the gridding service workflow across Google App Engine, Google Cloud Tasks, and Google Cloud Storage]

Constraints and Considerations

While the SDK provided the API to implement this workflow out of the box, getting it to work took a bit of time; I had to learn three Google products at once. I also encountered obstacles that I will share here so that other developers can benefit:

  1. The first obstacle was that the Java SDK for Google App Engine requires the Eclipse IDE; there is no support for NetBeans, the IDE I am more proficient with.
  2. The second obstacle was that I had to register my Eclipse IDE with Google so that I could deploy code from that environment. As it happened, the Google registration server was having issues that day, blocking me from making progress.
  3. The third obstacle was the use of Java 8: the Google Cloud SDK required Java 8, but Eclipse defaulted to Java 11. It took me a while to decipher the arcane error messages thrown at me.
  4. The fourth obstacle was having to pick a flavor of Google App Engine, either “Standard” or “Flexible”. The “Standard” option is cheaper to run because it doesn’t require an instance running at all times. The “Flexible” option has less warmup time because there is always at least one instance running. There are many more differences, not all of them well documented. The two options are similar but do not share the same API; you don’t write the same code for both environments. In the end, I picked the “Standard” option because it was the more constraining of the two, better suited to a proof of concept.
  5. The fifth obstacle was the confusion caused by the word “Promote” used by the Google SDK when deploying an instance. In this context, “promote” has nothing to do with advertising; it means routing production traffic to the newly deployed version. For a while, I couldn’t figure out why my application wouldn’t show any changes where I expected them. The answer was that I hadn’t “promoted” them.
  6. The last obstacle was the logging system. Google has a “Google Logging” product to access the logs produced by your application, and logging is essential to debugging unruly code that you can’t run locally. Despite several weeks of use, I still haven’t figured out how this product really works: it is designed for monitoring an application in production, not so much for debugging. Debugging with logs is difficult because there are several reasons why you might not find a log entry. The first possibility is that the code doesn’t go where you think it’s going, and the log was never produced. The second is that the log was produced but hasn’t shown up yet, because there is a significant ingestion delay. The third is that it has shown up but is nested inside some obscure hierarchy, and you won’t see it unless you expand the entire tree of logs. The log search doesn’t help much and has some strange UI quirks. I found that the most practical way to explore logs is to download them locally, then use the search capabilities of a text editor. Because the running servlet is not local to your development environment, debugging a Google App Engine application is a time-consuming activity.

In the end, the IVAAP Data Backend SDK passed this proof of concept with flying colors. Despite the constraints and obstacles of the environment, all the REST services that were written with the IVAAP cluster in mind are compatible with Google App Engine, without any changes. Programming is hard; it’s an investment of time and resources. Developing with the IVAAP Data Backend SDK preserves that investment because it makes minimal assumptions about how and where you will run your code.

For more information or for a free demo of IVAAP, visit int.com/products/ivaap/.


Filed Under: IVAAP Tagged With: API, cloud, Google, Google App Engine, ivaap, SDK

Jan 12 2021

Comparing Storage APIs from Amazon, Microsoft and Google Clouds

One of the unique capabilities of IVAAP is that it works with the cloud infrastructure of multiple vendors. Whether your SEGY file is posted on Microsoft Azure Blob Storage, Amazon S3, or Google Cloud Storage, IVAAP can visualize it.

It’s only when administrators register new connectors that vendor-specific details need to be entered. For all other users, the user interface is identical regardless of the data source. The REST API consumed by IVAAP’s HTML5 client is common to all connectors as well. The key component that does the hard work of speaking the language of each cloud vendor, and of hiding each vendor’s details from the other components, is the IVAAP Data Backend.

While the concept of “storage in the cloud” is similar across all three vendors, they each provide a different API to achieve similar goals. In this article, we will compare how to implement four basic operations. Because the IVAAP Data Backend is written in Java, we’ll only compare Java APIs.


Checking that an Object or Blob Exists

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
Region region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(region)
        .build();
try {
    HeadObjectRequest request = HeadObjectRequest.builder().bucket(bucketName).key(keyName).build();
    s3Client.headObject(request);
    return true;
} catch (NoSuchKeyException e) {
    return false;
}

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = ...
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder().endpoint(endpoint).credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
BlobClient blobClient = containerClient.getBlobClient(blobName);
return blobClient.exists();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = ...
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.ID));
return blob != null && blob.exists();


Getting the Last Modification Date of an Object or Blob

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
Region region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(region)
        .build();
HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
        .bucket(bucketName)
        .key(keyName)
        .build();
HeadObjectResponse headObjectResponse = s3Client.headObject(headObjectRequest);
return headObjectResponse.lastModified();

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = …
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
BlobClient blob = containerClient.getBlobClient(blobName);
BlobProperties properties = blob.getProperties();
return properties.getLastModified();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = …
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.UPDATED));
return blob.getUpdateTime();


Getting an Input Stream out of an Object or Blob

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
Region region = …
String bucketName = …
String keyName = …
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(region)
        .build();
GetObjectRequest getObjectRequest = GetObjectRequest.builder()
        .bucket(bucketName)
        .key(keyName)
        .build();
return s3Client.getObject(getObjectRequest);

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String blobName = …
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
BlobClient blob = containerClient.getBlobClient(blobName);
return blob.openInputStream();

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String blobName = …
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Blob blob = storage.get(bucketName, blobName, BlobGetOption.fields(BlobField.values()));
return Channels.newInputStream(blob.reader());


Listing the Objects in a Bucket or Container While Taking into Account Folder Hierarchies

Amazon S3

String awsAccessKey = …
String awsSecretKey = …
Region region = …
String bucketName = …
String parentFolderPath = ...
AwsCredentials credentials = AwsBasicCredentials.create(awsAccessKey, awsSecretKey);
S3Client s3Client = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .region(region)
        .build();
ListObjectsV2Request.Builder builder = ListObjectsV2Request.builder().bucket(bucketName).delimiter("/").prefix(parentFolderPath + "/");
ListObjectsV2Request request = builder.build();
ListObjectsV2Iterable paginator = s3Client.listObjectsV2Paginator(request);
Iterator<CommonPrefix> foldersIterator = paginator.commonPrefixes().iterator();
while (foldersIterator.hasNext()) {
…
}

Microsoft Azure Blob Storage

String accountName = …
String accountKey = …
String containerName = …
String parentFolderPath = ...
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.blob.core.windows.net", accountName);
BlobServiceClientBuilder builder = new BlobServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential);
BlobServiceClient client = builder.buildClient();
BlobContainerClient containerClient = client.getBlobContainerClient(containerName);
Iterable<BlobItem> iterable = containerClient.listBlobsByHierarchy(parentFolderPath + "/");
for (BlobItem currentItem : iterable) {
   …
}

Google Cloud Storage

String authKey = …
String projectId = …
String bucketName = …
String parentFolderPath = ...
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(authKey);
ByteArrayInputStream in = new ByteArrayInputStream(mapper.writeValueAsBytes(node));
GoogleCredentials credentials = GoogleCredentials.fromStream(in);
Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
                        .setProjectId(projectId)
                        .build()
                        .getService();
Page<Blob> blobs = storage.list(bucketName, BlobListOption.prefix(parentFolderPath + "/"), BlobListOption.currentDirectory());
for (Blob currentBlob : blobs.iterateAll()) {
 ...
}


Most developers discover these APIs through their favorite search engine. Driven by innovation and performance, cloud APIs become obsolete quickly. Amazon was the pioneer, and much of the documentation still indexed by Google covers the v1 SDK, even though v2 has been available for more than two years (it wasn’t a complete replacement, which doesn’t help). This sometimes makes research challenging for the simplest needs. Microsoft migrated from v8 to v12 more recently and has a similar challenge to overcome. Being the most recent major player, the Google SDK is not dragged down much by obsolete articles.

The second way that developers will discover an API is by using the official documentation. I found that the Microsoft documentation is the most accessible. There is a definite feel that the Microsoft Azure documentation is treated as an important part of the product, with lots of high-quality sample code targeted at beginners.

The third way that developers discover an API is through their IDE’s code completion. All cloud vendors make heavy use of the builder pattern. The builder pattern is a powerful way to provide options without breaking backward compatibility, but it slows down the self-discovery of the API. The Amazon S3 API also stays quite close to the HTTP protocol, using terminology such as “GetRequest” and “HeadRequest”. Microsoft had a higher-level API in v8 where you manipulated blobs directly; the v12 iteration moved away from that apparent simplicity by introducing the concept of blob clients instead. Microsoft offers a refreshing explanation of this transition. Overall, I found that the Google SDK tends to offer the simplest APIs for performing simple tasks.

Simplicity and discoverability are not the only criteria when comparing APIs; versatility and performance matter as well. The Amazon S3 Java SDK is probably the most versatile because of the larger number of applications that have been built on its technology. It even works with S3 clones such as MinIO Object Storage (and so does IVAAP). The space where there are still a lot of changes is asynchronous APIs. Asynchronous APIs tend to offer higher scalability and faster execution, but they can only be compared in the specific use cases where they are actually needed. IVAAP makes heavy use of asynchronous APIs, especially to visualize seismic data; that would be the subject of another article, as this area evolves rapidly and deserves a more in-depth comparison.
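
To illustrate the asynchronous style, here is a minimal sketch of a non-blocking read with the AWS SDK v2 S3AsyncClient; the bucket and key are hypothetical, and credentials come from the default provider chain:

import java.util.concurrent.CompletableFuture;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

// Minimal sketch: reads an object without blocking the calling thread.
public final class AsyncReadExample {

    public static void main(String[] args) {
        S3AsyncClient s3 = S3AsyncClient.create();
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket("my-bucket")          // hypothetical
                .key("seismic/survey.segy")   // hypothetical
                .build();
        CompletableFuture<ResponseBytes<GetObjectResponse>> future =
                s3.getObject(request, AsyncResponseTransformer.toBytes());
        future.thenAccept(bytes ->
                System.out.println("Read " + bytes.asByteArray().length + " bytes"))
              .join(); // block here only for the sake of the demo
        s3.close();
    }
}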

For more information on IVAAP, please visit www.int.com/products/ivaap/



Filed Under: IVAAP Tagged With: API, cloud, Google, ivaap, java, Microsoft

Apr 23 2020

Opening IVAAP to Your Proprietary Data Through the Backend SDK

When doing demos of IVAAP, the wow factor is undeniably its user interface, built on top of GeoToolkit.JS. What users of IVAAP typically don’t see is the part accessing the data itself: the IVAAP backend. When we designed the IVAAP backend, we wanted our customers to be able to extend its functionality. This is one of the reasons we chose Java as its programming language: customers typically have access to Java programmers.

Java is a well-known, general-purpose language, but the IVAAP Backend Software Development Kit (SDK) is typically only discovered during an IVAAP evaluation. In previous articles, I described the Lookup API (How to Empower Developers with à la Carte Deployment in IVAAP Upstream Data Visualization Platform) and the use of scopes (Using Scopes in IVAAP: Smart Caching and Other Benefits for Developers). As the SDK has grown, I thought it would be a good time to review what else this SDK provides.

One Optimized Use Case: Plugging Your Own Data

The most common question that I get is: “I see that you can access a WITSML datasource or a PPDM database. I have my own proprietary store for geoscience data; what do I need to do to make IVAAP visualize the data in my data store?” This is where the SDK comes into play. You do not need to modify the IVAAP backend’s code to add your own data. In a nutshell, you just need to write a few Java classes, compile them, and add them to your IVAAP deployment.

The Java classes you write need to implement the Application Programming Interface (API) that the SDK defines. If you are a developer, this answer is not enough; it is the textbook definition of an SDK. What makes the IVAAP Backend SDK efficient for our use case is that you only need to write the API for the data you have. While IVAAP’s built-in data model allows the visualization of some 30 different aspects of a well (log curves, deviations, tubing sets, mud logs, raster logs, etc.), you only need to write classes for the data you have. For example, to visualize log curves, regardless of how these curves are stored, you only need to write about a dozen classes for a complete implementation.

The next question I get at this point is: “How do I know what to write?” There is a large amount of documentation available. During the evaluation process, you are granted access to our developer site. This site is a reference used by all INT developers working on the IVAAP backend, whether they are developing IVAAP itself or creating plugins for customers. It’s a wiki and gets updated regularly; when I get support questions about the SDK, I typically write an article there and share the link. This is not the only documentation available: there is classic Javadoc documentation that details the API in a formal manner, and there is also sample code. We created a sample connector to a SQL database storing well curves, trajectories, well locations, and schematics as a practical example of how to use the SDK.

An Extensive Geoscience Data Model to Leverage

Lots of work has been done in IVAAP to facilitate workflows associated with wells, whether they are drilling workflows, production monitoring workflows, or simply inventory management. Specifically, IVAAP has a data model to expose the location of wells, log curves, deviation curves, mud logs, schematics, fracking data, core images, raster logs, tops, and any type of well documentation. Wells are not the only data model IVAAP includes: other models exist for seismic data and reservoirs. Several types of surfaces are also supported, such as faults, grid surfaces, triangle meshes, and seismic horizons.

These data models were built over time, based upon the common denominator between models coming from different systems. For example, if you are familiar with WITSML, you will find that the definition of a well log resembles what WITSML provides, but is flexible enough to also support LAS and DLIS files. From a developer perspective, the data model is exposed through the SDK’s API without making any assumption about how the data is stored. The data model works for data stored in the cloud, on a file system, in a SQL database, and even for data exposed only through a web service. While most of IVAAP’s connectors access one form of data store at a time, some connectors mix storages to combine data from web services and cloud storage. IVAAP’s data model is storage-agnostic, and the services that expose this data model to the HTML5 client are storage-agnostic as well.

IVAAP covers the most common data types found in geoscience. It provides the services to access this data, and the UI to visualize it. When starting an IVAAP development project, most developers should only have to focus on plugging their data, expressing through the SDK’s API on how to retrieve this data.

An API to Customize Entitlements

There is one more way the IVAAP SDK makes the developer experience seamless when plugging in a proprietary datastore: not only does no code have to be written to expose the data to the viewer, but no code has to be written to control who has access to which data. Both aspects are built into the code that calls your implementation. You only have to write the data access layer, without worrying about entitlements or web services. By default, entitlements are based upon the information entered in the IVAAP Administration application.

This separation between data access and entitlements saves development time, but there are cases where a data store controls both the data and access to it. When IVAAP needs to access such an integrated system, entitlement checks need to be performed by the data access code. The entitlement API allows these checks to be performed at the data level.

The entitlement API is actually very fine-grained. You can customize the behavior of each service to limit access to specific data points. For example, the default behavior of IVAAP is to grant access to all curves of a well when you have been granted access to that well. Depending on your business rules, you might elect to restrict access to specific log curves. The SDK doesn’t force you into an “all or nothing” decision.

An API to Implement Your Own REST Services

Another typical use case is when you need to give access to data that doesn’t belong to the IVAAP built-in data model. In this particular situation, you need to extend IVAAP by adding custom widgets, and ad-hoc web services are needed to expose the relevant data to this widget. There is of course an API for this. External developers use the same API as INT developers to implement web services. INT has developed more than 500 REST services using this API, and external developers benefit from this experience.

Most services are JSON-based, and IVAAP uses the Jackson libraries to create JSON content. To advertise capabilities to the HTML5 client, the IVAAP backend uses HATEOAS links. For example, if the JSON description of a well has a link to the mud logs service, then this well has mud logs. If this link is not present, the HTML5 client understands that this well doesn’t contain mud logs and adapts its UI accordingly. If you were to add your own service exposing more data associated with a well, you would typically want to add your own HATEOAS links to the description of wells. Adding HATEOAS links to existing services is possible by plugging in so-called Entity classes; you do not need to modify the code of a service to modify its behavior.
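
To illustrate, here is a minimal sketch of building such a payload with Jackson; the field names and link format are hypothetical, not the actual IVAAP payload:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Minimal sketch: a well description whose HATEOAS link advertises
// a mud logs service. Field names and link format are hypothetical.
public final class WellJsonExample {

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode well = mapper.createObjectNode();
        well.put("id", "well-123");
        well.put("name", "Discovery 1");
        // The presence of this link tells the HTML5 client that the well
        // has mud logs; its absence means the well has none.
        ObjectNode links = well.putObject("_links");
        links.putObject("mudlogs")
             .put("href", "/api/wells/well-123/mudlogs");
        System.out.println(
                mapper.writerWithDefaultPrettyPrinter().writeValueAsString(well));
    }
}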

IVAAP’s REST services follow the OpenAPI specification. There is actually a built-in web service whose only purpose is to expose the available services in the classic Swagger format. IVAAP’s SDK uses annotations similar to the Swagger Annotations API; if you are familiar with that API, documenting your own REST services should be a breeze.

Most of the REST services are JSON-based, but sometimes binary streams are used instead for performance reasons. Binary streams are typically used in IVAAP to expose seismic data, but also surfaces. The SDK uses events to implement such streaming services.

An API to Implement Your Own Real Time Feeds

The service API is not limited to REST services: an API is also available to communicate with the IVAAP HTML5 client through WebSockets. The WebSockets API is typically used to implement real-time communications between the client and the server. For example, when a user opens a well, the user interface uses WebSockets to send a subscription message to the backend, requesting to be notified if this well changes. This enables a whole set of capabilities, such as real-time monitoring; this is the API we use to monitor wells from WITSML datasources. The SDK includes an entire set of hooks so that customers can write their own feeds, including subscription, unsubscription, and broadcast of messages.

When you write REST services, the container details are abstracted away and you only need to worry about implementing domain-related code. A REST service working in a Tomcat-based development environment will work without any modification in a Play cluster. Likewise, feeds developed with the SDK work seamlessly in both Tomcat and Play. On a developer station, the SDK uses endpoints from the Servlet API to carry messages; in a Play cluster, it uses ActiveMQ. ActiveMQ offers scalability and reliability features that servlets lack, such as high message rates and reliable delivery of messages. The use of ActiveMQ is transparent to the developers of feeds.

Utilitarian APIs

There is more to the IVAAP SDK than its APIs to access data, write services, or customize entitlements. A few other APIs are worth mentioning. One of them is the API to perform CRS conversions. Its default implementation uses Apache SIS, but the API itself is generic in nature. CRS conversions are often needed in geoscience, for example to visualize datasets on a map, on top of satellite imagery. Years of work have gone into the Apache SIS library, and virtually no work is needed by IVAAP developers to leverage it when the SDK is used.
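
For illustration, here is a minimal sketch of a conversion performed directly with Apache SIS, transforming a WGS84 latitude/longitude into Web Mercator coordinates:

import org.apache.sis.geometry.DirectPosition2D;
import org.apache.sis.referencing.CRS;
import org.opengis.geometry.DirectPosition;
import org.opengis.referencing.crs.CoordinateReferenceSystem;
import org.opengis.referencing.operation.CoordinateOperation;

// Minimal sketch: converts a WGS84 latitude/longitude to Web Mercator
// (EPSG:3857) with Apache SIS, the SDK's default CRS engine.
public final class CrsConversionExample {

    public static void main(String[] args) throws Exception {
        CoordinateReferenceSystem wgs84 = CRS.forCode("EPSG:4326");
        CoordinateReferenceSystem webMercator = CRS.forCode("EPSG:3857");
        CoordinateOperation operation = CRS.findOperation(wgs84, webMercator, null);
        // EPSG:4326 declares its axes in latitude, longitude order.
        DirectPosition houston = new DirectPosition2D(29.76, -95.37);
        DirectPosition projected = operation.getMathTransform().transform(houston, null);
        System.out.println("Easting/Northing: " + projected);
    }
}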

There are also APIs to execute code at startup and to query the environment that IVAAP is running on. The Lookup API gives access to the features that are plugged. The DataSource API indicates which data sources are configured to run in the JVM. The Hosted Services API provides an inventory of the external services that an IVAAP instance needs to interact with. A hosted service could be the REST service that evaluates formulas, or the machine learning system that IVAAP feeds its data to.

A “Developer-Friendly” Development Environment

We made a lot of effort to make sure the development process would be as simple as possible. Developers experienced with Java servlets will be at ease in their IVAAP development environment; they will use tools they are familiar with, such as Eclipse and Tomcat. A production instance of IVAAP doesn’t use servlets; it uses the Play framework. By following the SDK’s API, it is virtually transparent to developers that their code will be deployed in a cluster.

There are a few instances where awareness of the cluster environment is needed. For example, when caching is involved, you want to make sure that all caches are cleared across all JVMs when data gets updated. The IVAAP SDK includes an API to send and receive cluster events, and to create your own events. Since events are serialized from/to JSON, instances in the cluster do not need to share the same build version to interact with each other. This was a deliberate design choice so that you can upgrade your cluster while it’s running, without service interruption.

Caching is a large topic, outside of the scope of this article. IVAAP’s SDK proposes a “DistributedStore” API that hides the complexity of sharing state across JVMs. As long as you use this API, code that caches data will work without any modification in a single-JVM development environment and a multiple-JVMs production environment.

Finally, the SDK’s API is designed to allow fast iterative development. For example, once you have implemented the two classes that define how to list the wells in your datastore, you can test them right away with Postman. Earlier I wrote that plugging in your own log curves requires about a dozen classes; there is no need to write all twelve to start seeing results. Actually, you do not even need Postman to test your web services: a REST service written with the SDK can be tested with JUnit, which saves time by eliminating the need to launch Tomcat.

When you evaluate IVAAP, you might not have enough time to grasp the depth of the IVAAP SDK. Hopefully, this guide will help you get started.


Filed Under: IVAAP Tagged With: API, geoscience, ivaap, java, REST, SDK

Oct 24 2018

My Experience at INT with IVAAP: A First Look as a Developer

I started at INT a few weeks ago and my first task as a new INT developer was to add a data connector to IVAAP, INT’s HTML5 visualization framework for upstream E&P solutions.

As a new member of the software development team, I had no prior experience with development on this platform. To gain knowledge of IVAAP and to understand more about the IVAAP software development kit, I used the IVAAP developer’s guide. I found this guide quite useful as it made the key points behind IVAAP easily understandable.

With only a few years of experience with Java, I was surprised by the lookup system. IVAAP has a microservices REST architecture and is very modular in nature; the lookup system ties all these modules together. It’s quite powerful, but it was something I had never encountered before.

Coding with IVAAP uses a simple model where each entity implementation consists of a POJO (Plain Old Java Object) class and its finder. This paradigm is consistent throughout the entire code base. Essentially, for this project, I plugged in only a few classes:

  • A data source type class
  • A data source class
  • A log curve class and its finder
  • A log curve data series class and its finder
  • A log curve data frame class and its finder

Implementing these classes essentially consists of following templates, where the public API provides hooks and the developer adds the implementation specific to their project. The public API is documented, making it clear what each method or class is meant to do.
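
To give a feel for this paradigm, here is a minimal, entirely hypothetical sketch of a POJO and its finder backed by a SQL store; none of these class names come from the actual IVAAP SDK:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical names throughout: the real SDK defines its own entity
// and finder interfaces; this only illustrates the POJO-plus-finder shape.
public final class LogCurve {

    private final String id;
    private final String name;
    private final String unit;

    public LogCurve(String id, String name, String unit) {
        this.id = id;
        this.name = name;
        this.unit = unit;
    }

    public String getId() { return id; }
    public String getName() { return name; }
    public String getUnit() { return unit; }
}

final class LogCurveFinder {

    private final Connection connection;

    LogCurveFinder(Connection connection) {
        this.connection = connection;
    }

    // Lists the curves of a well from a proprietary SQL store.
    List<LogCurve> findByWell(String wellId) throws SQLException {
        List<LogCurve> curves = new ArrayList<>();
        String sql = "SELECT id, name, unit FROM log_curves WHERE well_id = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, wellId);
            try (ResultSet results = statement.executeQuery()) {
                while (results.next()) {
                    curves.add(new LogCurve(results.getString("id"),
                            results.getString("name"), results.getString("unit")));
                }
            }
        }
        return curves;
    }
}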

Even though this project is to be deployed on Linux, my development environment was on Windows. It consisted of an Integrated Development Environment (the NetBeans IDE) and the Postman tool for testing individual services. This project accessed a SQL server database, so I used dbVisualizer to browse the data.

Since this project only involved adding a data connector to IVAAP, I didn’t try to add new services, only to extend the data sources it supports. Building a connector on top of the existing web services allowed me to validate my work as it progressed. For example, after plugging in a new data source type, I could immediately verify that it worked as intended using Postman and following the HATEOAS links. This remained true when I plugged in a data source and each finder; there is no need to wait until all classes are plugged in to verify that the logic works. I also found that the error management built into IVAAP helped me be efficient, since the error reports made it easy to trace the actual issue.

The learning curve of the IVAAP software development kit is gradual; the API guides you. Unlike some of the frameworks I have worked with, no prior knowledge is necessary to get started. You can be effective from day one with just basic Java knowledge.

Visit our products page for more information about IVAAP or contact us for a demo.


Filed Under: IVAAP Tagged With: API, ivaap, microservices, SDK
