Core technologies
  • TensorFlow
  • Keras
  • Python
  • PyTorch
  • JupyterHub
  • SciPy
  • NumPy
Performance technologies
  • Weka
  • Druid
  • scikit-learn
  • Matplotlib
  • TensorBoard
  • Julia
  • R
  • pandas
  • D3.js
  • PySpark
  • OpenCV
  • Spark ML
Enabling technologies
  • XGBoost
  • Seaborn
  • MATLAB
  • TF Privacy
  • TF Cloud
  • GNU Octave
  • TF Model Analysis
  • MindsDB
  • CleverHans
  • Google Charts
Candidate technologies
  • Elyra
  • Kedro
  • DL4J
  • Gensim
  • cuML
  • spaCy
  • CUDA-X
  • Google's Differential Privacy
  • What-If Tool (WIT)
  • Polyaxon
  • MXNet
  • Rasterio
  • Intel MKL-DNN

[FRS maturity analysis: technologies ranked by Functionality, Reliability, and Scalability]

Key lessons from our ML practitioners

  • Transfer learning with DSMs

    Applying multiple regularization techniques while optimizing the feature map for the target task helps avoid the overall drop in performance and the negative transfer that can occur when applying transfer learning to Domain-Specific Models (DSMs).

  • Model fine-tuning is key

    Tuning models during and after training through quantization and pruning can reduce model size by a factor of four and improve speed without sacrificing precision, which is critical for AI on edge devices.

  • The inevitable dimensionality

    Developing a good sense for the data and working with business domain experts to analyze and evaluate every move helps avoid the curse of dimensionality when training classifiers in high-dimensional spaces.
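
The fine-tuning lesson above can be illustrated with a minimal post-training quantization sketch. This is an illustrative NumPy example, not a framework-specific implementation; `quantize_int8` and `dequantize` are hypothetical helpers showing why int8 quantization yields roughly a 4x size reduction with a bounded precision loss.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric affine quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0          # one float32 step per int8 level
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
size_ratio = w.nbytes / q.nbytes             # float32 -> int8: 4x smaller
err = np.abs(dequantize(q, scale) - w).max() # bounded by one quantization step
```

The worst-case reconstruction error is about half a quantization step (`scale / 2`), which is why precision often survives the 4x compression in practice.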

From model-centric to data-centric Machine Learning

AI systems today consist mainly of machine learning algorithms that manipulate and process data to build predictive models. While continuously iterating on ML models and trying different techniques to optimize neural networks is critical to the success of AI products, improving data quality and the data-labeling process has proven even more important to AI initiatives.

Consistent data collection and careful manual labeling can make all the difference in establishing "the Ground Truth" in data sets for machine learning algorithms.
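
One common way to quantify labeling consistency is inter-annotator agreement. The sketch below computes Cohen's kappa for two hypothetical labelers using only the standard library; `cohens_kappa` and the example labels are illustrative, not part of any framework mentioned above.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two labelers, corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Probability the two labelers agree purely by chance.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labeling the same six images (hypothetical data).
a = ["cat", "cat", "dog", "cat", "dog", "dog"]
b = ["cat", "cat", "dog", "dog", "dog", "dog"]
kappa = cohens_kappa(a, b)  # 1.0 means perfect agreement, 0.0 means chance-level
```

Tracking a statistic like this across labeling batches is one way to catch drift in "the Ground Truth" before it reaches model training.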

Our team puts a lot of emphasis on every step of the lifecycle of our machine learning products, including model architecture, model training, error analysis, and iterative improvement. We also design project-specific frameworks and processes for data acquisition, augmentation, and labeling that significantly impact model accuracy when models are tested in production against real-world scenarios.

Driving the next evolution of AI


The ongoing advancements in AI are anticipated to create significant business opportunities and societal value in both the short and long term.

At area99 we take up the mantle and charge ahead, working with business partners and research institutes to commercialize scientific discoveries and create AI solutions that serve society and bring more radical innovations to people in the future.