Groundbreaking research enabling applied AI solutions.

deepkapha.ai conducts groundbreaking yet practical AI Research so you can build your AI Solution rapidly.

Research

Our mission is to conduct practical and groundbreaking AI Research that companies and professionals can directly apply to their production systems.

Our goal is to provide AI Solutions by implementing our algorithms, tools and technologies.

Our researchers and engineers are dedicated to this goal, and our team works relentlessly to build practical software and algorithms.

We regularly publish our research and present at leading conferences, but
we take a distinctive approach: applying that research directly in industry verticals.
We believe that is the only way to walk the talk!

May 27, 2018

Intra-thalamic and Thalamocortical Connectivity: Potential Implication for Deep Learning

Contrary to the traditional view that the thalamus acts as a passive relay station of sensory information to the cortex, a number of experimental studies have demonstrated the effects of perigeniculate and corticothalamic projections on the transmission of visual input. In the present study, we implemented a mechanistic model to facilitate the understanding of perigeniculate and corticothalamic effects on the transfer function of geniculate cells and their firing patterns. As a result, the model successfully captures some fundamental properties of early-stage visual processing in mammalian brain. We conclude, therefore, that the thalamus is not a passive relay center and the intra-thalamic circuitry is of great importance to biological vision. In summary, intra-thalamic and thalamocortical circuitry has implications in early-stage visual processing, and could constitute a valid tool for refining information relay and compression in artificial neural networks (ANN), leading to deep learning models of higher performance.
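To give a flavor of the idea of a non-passive relay, here is a toy illustration of inhibitory gating, where feedback inhibition modulates how much of a signal is transmitted. This is a minimal sketch for intuition only: the `relay` function, its parameters, and the sigmoid gate are assumptions of ours, not the mechanistic model from the paper.

```python
import numpy as np

def relay(signal, inhibition, gain=4.0):
    """Toy gated relay: stronger feedback inhibition suppresses transmission.

    Purely illustrative; the sigmoid gate and parameter values are
    assumptions, not the paper's mechanistic thalamic model.
    """
    gate = 1.0 / (1.0 + np.exp(gain * (inhibition - 0.5)))  # in (0, 1)
    return gate * signal

x = np.array([0.2, 0.8, 1.0])
print(relay(x, inhibition=0.1))  # weak inhibition: signal passes largely intact
print(relay(x, inhibition=0.9))  # strong inhibition: signal is attenuated
```

The point of the analogy is that the relay stage is itself adaptive, which is the property the abstract suggests could inform relay and compression layers in ANNs.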
May 22, 2018

ARiA

Utilizing Richard’s Curve for Controlling the Non-monotonicity of the Activation Function in Deep Neural Nets

This work introduces a novel activation unit that can be efficiently employed in deep neural nets (DNNs) and performs significantly better than the traditional Rectified Linear Units (ReLU). The function developed is a two parameter version of the specialized Richard’s Curve and we call it Adaptive Richard’s Curve weighted Activation (ARiA). This function is non-monotonous, analogous to the newly introduced Swish, however allows a precise control over its non-monotonous convexity by varying the hyper-parameters. We first demonstrate the mathematical significance of the two parameter ARiA followed by its application to benchmark problems such as MNIST, CIFAR-10 and CIFAR-100, where we compare the performance with ReLU and Swish units. Our results illustrate a significantly superior performance on all these datasets, making ARiA a potential replacement for ReLU and other activations in DNNs.
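As a rough sketch of the idea, the two-parameter form described in the abstract can be written as x · (1 + e^(−βx))^(−α), a Richards-curve weighting of the identity; with α = β = 1 the weighting reduces to the logistic sigmoid, recovering Swish. The exact parameterization below is our assumption from the abstract, not a verbatim reproduction of the paper's definition.

```python
import numpy as np

def aria2(x, alpha=1.5, beta=2.0):
    """Two-parameter Richards-curve weighted activation (illustrative sketch).

    Assumed form: x * (1 + exp(-beta * x)) ** (-alpha).
    With alpha = beta = 1 the weight is the logistic sigmoid,
    so the unit coincides with Swish.
    """
    return x * (1.0 + np.exp(-beta * x)) ** (-alpha)

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 101)
assert np.allclose(aria2(x, alpha=1.0, beta=1.0), swish(x))
```

Varying α and β changes the depth and location of the negative dip, which is the "precise control over non-monotonous convexity" the abstract refers to.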

Coming soon: DeepSwitch

The solutions found by adaptive algorithms such as Adam sometimes fail to generalize as well as those found by SGD, even though adaptive methods usually perform well on the training set. There is therefore often a trade-off between test accuracy and update behavior near local optima. Keskar et al. showed that adaptive methods work better in the initial portion of training, while SGD tends to work better in the later portion. The basic premise of this work is to investigate the use of fuzzy logic to extend this phenomenon into a more generic and robust control for optimizer switching. Unlike prior work, we also incorporate quasi-Newtonian optimizers as well as adaptive optimizers other than Adam, and work out a switching logic that maximizes generalization accuracy while having minimal effect on training time.
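The Adam-then-SGD pattern described above can be sketched on a toy quadratic objective. This is a minimal illustration with a fixed step-count trigger standing in for the switching logic; the fuzzy controller the work describes, and all hyperparameter values here, are not from the source.

```python
import numpy as np

def grad(w):
    # Gradient of the toy objective f(w) = (w - 3)^2, minimized at w = 3.
    return 2.0 * (w - 3.0)

w, m, v = 0.0, 0.0, 0.0
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8

# Phase 1: Adam updates for fast early progress.
for t in range(1, 51):
    g = grad(w)
    m = b1 * m + (1 - b1) * g           # first-moment estimate
    v = b2 * v + (1 - b2) * g * g       # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

# Phase 2: switch to plain SGD for the later portion of training.
# (A fixed switch point stands in for the fuzzy switching criterion.)
for _ in range(100):
    w -= 0.1 * grad(w)
```

On this convex toy problem both phases converge; the interesting behavior the work targets, better generalization from late-phase SGD, only shows up on real non-convex training runs.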

Join us

Our DeepRRP (Deep Learning Remote Residency) Program is currently in beta and open to AI Researchers who are already conducting some form of research in Machine Learning, Deep Learning or Artificial Intelligence.

For more information on our selection criteria, please click the enroll button below.