
Case Studies

Case Study-1

Background

Infrastructure development driven by urbanization is accelerating. On account of this demand from the infrastructure side, the market for hot rolled coils has grown rapidly.

This company is a US-based market leader in supply chain and logistics management for hot rolled coils.

It maintains a 100% contract performance rate, facilitated by sophisticated pricing models. Astute process and risk management capabilities set the company ahead of its competitors on several parameters. Through just-in-time delivery, the company maintains a record of consistent inventory engagement and improvement. In this way, it has established itself as a major hub in the US market.

Problem Statement

Since the company is the market leader and performing well in its geography of operation, it receives orders continuously. Such involvement in multiple tasks generates a flood of data.

  • a) Maintaining and managing such a large amount of data became an unavoidable challenge for the company, as the data could not be neglected and could support the company's further growth plans.
  • b) Moreover, a major chunk of operational expenses was incurred on the resources that managed this data. The organization was therefore looking at cloud migration: moving the data to the cloud would free the large storage space it occupied, and access could be controlled by granting it only to authorized users.
  • c) The company was also looking for real-time data replication between on-cloud PostgreSQL and Oracle DB.

Bizmetric Solution

To begin, we carried out exhaustive requirement gathering on the organization's infrastructure setup. We then performed a connectivity check and found that the source system (an ERP system) was on a public cloud, whereas the reporting system was on-premise. The first challenge was to move the data from the cloud to on-premise. To replicate the data, we pursued two approaches:

  • a) In the first approach, we deployed a log-based CDC tool, Apache Debezium. Debezium, in combination with Apache Kafka, operates on the database's transaction log. Because Debezium lets applications react to insert, update, and delete events as they happen, it offered fast and durable change capture. The Debezium PostgreSQL connector was configured to monitor the database, recording row-level changes against the PostgreSQL schema. Since a snapshot of the schema was analyzed consistently, any modification in PostgreSQL was continuously reported, and Kafka kept a record of all events for easy consumption by applications or services.
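As a minimal sketch of how a downstream service consumes such change events (the table, column names, and event layout here are illustrative assumptions, not the client's actual configuration), a consumer can inspect the `op` field of a Debezium-style envelope to distinguish inserts, updates, and deletes:

```python
import json

# A simplified Debezium-style change event envelope: "op" is "c" (create),
# "u" (update), or "d" (delete); "before"/"after" hold the row images.
sample_event = json.dumps({
    "payload": {
        "op": "u",
        "before": {"coil_id": 101, "status": "IN_TRANSIT"},
        "after": {"coil_id": 101, "status": "DELIVERED"},
        "source": {"table": "shipments"},
    }
})

def describe_change(raw_event: str) -> str:
    """Translate a Debezium-style change event into a readable summary."""
    payload = json.loads(raw_event)["payload"]
    op_names = {"c": "INSERT", "u": "UPDATE", "d": "DELETE"}
    op = op_names.get(payload["op"], "UNKNOWN")
    table = payload["source"]["table"]
    # Deletes carry the row in "before"; inserts/updates in "after".
    row = payload["before"] if op == "DELETE" else payload["after"]
    return f"{op} on {table}: {row}"

print(describe_change(sample_event))
```

In the deployed pipeline, events like this arrive on Kafka topics rather than as local strings; the parsing logic stays the same.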
  • b) In the second approach, we used Apache Spark together with the Apache Parquet columnar format for direct table queries in near real time. Spark enabled fast in-memory query processing. We adopted a dimensional modeling approach, transforming and loading data into unmanaged Parquet staging partition tables. This modeling technique eases interpretation of the information by grouping it into coherent business categories.
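The staging-partition idea in the second approach can be sketched in plain Python (a stdlib stand-in for the actual Spark/Parquet job; the table and column names are illustrative assumptions):

```python
from collections import defaultdict

# Source rows as they might arrive from the ERP extract; the column
# names (order_id, region, load_date) are illustrative assumptions.
rows = [
    {"order_id": 1, "region": "TX", "load_date": "2021-03-01"},
    {"order_id": 2, "region": "CA", "load_date": "2021-03-01"},
    {"order_id": 3, "region": "TX", "load_date": "2021-03-02"},
]

def stage_by_partition(rows, key="load_date"):
    """Group rows into buckets by partition key, mimicking how a Spark job
    writes an unmanaged Parquet staging table partitioned by `key`."""
    partitions = defaultdict(list)
    for row in rows:
        # Each distinct key value maps to one partition directory,
        # e.g. staging/load_date=2021-03-01/part-0000.parquet
        partitions[row[key]].append(row)
    return dict(partitions)

staged = stage_by_partition(rows)
print(sorted(staged))             # partition keys
print(len(staged["2021-03-01"]))  # rows landing in one partition
```

In Spark itself this corresponds to a `partitionBy` on the writer; partition pruning then lets queries that filter on the key skip irrelevant files entirely.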

Solution Highlights

Hadoop Platform :- Apache Kafka, Apache Spark, Apache Parquet
Programming Language :- Python
Cloud :- Private cloud
Data Lake :- Apache Parquet, Apache Spark SQL
Database :- PostgreSQL, Oracle
CDC Tool :- Debezium
BI/Search :- Superset, Elasticsearch

Achievements

  • Data Replication in 10 minutes
  • Management of $1 Million Overhead Expenses
  • 30% savings in procurement cost

Case Study-2

Background

Arriving at the root cause of any issue involving cell studies requires deeply research-oriented work at the DNA level.

Our Client is one of the premier research institutes, aided by the Department of Biotechnology, Government of India.

Cell-culture, cell repository & immunology are some of the areas where the organization holds exceptional expertise.

The high-end results produced by this body after exhaustive research work have immense relevance to the healthcare and pharmaceutical industries. The continually upgraded research techniques and methodologies adopted by the organization are opening new dimensions to constantly enhance output.

Problem Statement

Since the organization is engaged in research, many processes in its pipeline need sequential execution. To ease and accelerate these tasks, the organization was looking for smart automation. The pain areas that needed quick mitigation were as follows:

  • a) Improving the accuracy of particle picking
  • b) Enhancing the output of the research work
  • c) Automating the manual operation and 3-D representation of the picked particles

Bizmetric Solution

We proceeded through a series of steps. The first step was CTF estimation, followed by its assessment. After assessment, particle picking was performed, with models trained for more accurate results. The particle extraction process helped obtain stable particles. 2-D clustering sorted the bad samples from the good ones. To enhance visualization, a beautifier was deployed. After creation of the stack subset, the 3-D model process was initiated; creating an intermediate-resolution model from the best stack subset is part of this process. The final step was 3-D refinement, which enhanced the 3-D model result.
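The sequential stages above can be sketched as a simple pipeline runner (a stdlib stand-in for the actual crYOLO/SPHIRE workflow; the step bodies and the clustering threshold are placeholder assumptions):

```python
# Each stage takes the running state dict and returns it updated; the
# bodies are placeholders standing in for the real cryo-EM tools.
def ctf_estimation(state):
    state["ctf_done"] = True
    return state

def particle_picking(state):
    # In the actual solution a trained YOLO model picks particles;
    # here we fake per-particle confidence scores.
    state["particles"] = [0.91, 0.42, 0.88, 0.30]
    return state

def two_d_clustering(state):
    # Sort bad samples from good ones (threshold is illustrative).
    state["particles"] = [p for p in state["particles"] if p >= 0.5]
    return state

def three_d_refinement(state):
    state["model"] = f"3-D model from {len(state['particles'])} particles"
    return state

PIPELINE = [ctf_estimation, particle_picking, two_d_clustering,
            three_d_refinement]

def run(stages, state=None):
    """Execute the stages strictly in order, threading state through."""
    state = state or {}
    for stage in stages:
        state = stage(state)
    return state

result = run(PIPELINE)
print(result["model"])
```

The point of the structure is that each stage consumes the previous stage's output, so steps such as 2-D clustering cannot run before picking has produced particles.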


Solution Highlights

Platform :- TensorFlow/Keras
Programming Language :- Python
Model/Technique :- YOLO
Programming Suites :- crYOLO/SPHIRE

Achievements

  • 85% accuracy in particle picking
  • 3-D representation of particles was achieved
  • Projections clearly exhibited