I built this website to better document my research and experience in engineering and artificial intelligence. It contains a synopsis of my portfolio and details for each of the products I have built. Some I can't talk about due to proprietary agreements, but those that I can are detailed below.
Most of my recent work has been in founding and leading a startup developing a new type of nanotechnology that enables a sense of smell for robotics. Much of this is still in "stealth"; for more details, please contact me.
Most of my recent research is in multi-agent reinforcement learning (swarm intelligence) and active learning (learning on the fly with minimal data). My doctoral research continues this work by applying it to machine olfaction, showing how robots can navigate by scent.
For my second stint at Toyota, I work with an incredibly talented team to build generative AI and robotics technologies within the enterprise.
I worked on AI and autonomy for aircraft. That's about all I can say.
I am a visiting scientist working on the research and development of a multimodal nano-sensing technology that uses electrochemical detection, computer vision, and spectrometry. We are using sensor fusion to give machines the ability to smell by learning from other modalities.
While at Toyota, my role was to build dialogue-based NLP models for a multilingual voice assistant. I focused particularly on how to teach an existing English voice assistant 40+ additional languages and how to transfer-learn new domains autonomously through LLM-based prompt tuning. This involved building multiple training pipelines and applying dimensionality reduction techniques to learn the structure of the data space well enough to inform the required prompts. I also developed multimodal segmentation models using lidar and vision, and worked on how to fuse those modalities for scene understanding.
After working in flight sensor systems and optics for the defense industry (GMRE, Raytheon), I started a small machine learning company in 2019 called Seekar Technologies that specialized in automatic target recognition (ATR) and autonomy for defense-related applications. I helped fork Seekar's ATR technology to serve various other computer vision and autonomy applications. While at Seekar, I led all engineering. I also coordinated marketing, customer acquisition, and contract development, which led to year-over-year increases in net sales and company valuation. My cofounder and I exited the company after Seekar's autonomy technology was acquired by a Dallas-based sensor company.
I worked in optics and things that fly on their own (autonomous aerial systems), along with a number of other projects I can't talk about.
I worked as an engineer on military aircraft and sensor systems. I got my feet wet developing software for the Department of Defense and gained an appreciation for what it means to engineer systems for the Warfighter.
I worked as an app developer during college, building iOS and Android mobile applications for machine learning startups and other tech businesses. Many of them are still available today. My business partner at the time was involved in the Utah State University student government, and for one project we developed an iOS app to drive more engagement with campus events. Within a week of launch, nearly 30% of on-campus students were active users, and that figure reached nearly 80% by the end of the month.
Concentration: Multiagent and multimodal reinforcement learning
Concentration: Swarm intelligence and reinforcement learning
Minor: Mathematics
Concentration: Uncertainty estimation and autonomous navigation
Uncertainty of the environment limits the circumstances under which any optimization problem can provide meaningful information. Multiple optimizers can combat this problem by communicating different information through cooperative coevolution. In reinforcement learning (RL), uncertainty can be reduced by applying learned policies collaboratively with another agent. Here, we propose DeadReckon to evolve an optimal policy for such scenarios. We hypothesize that self-paced co-training can allow factored particle swarms with imperfect knowledge to consolidate knowledge from each of their imperfect policies in order to approximate a single optimal policy. Additionally, we show how the performance of co-training swarms of RL agents can be maximized through the specific use of Expected SARSA as the policy learner. We evaluate DeadReckon against comparable RL algorithms and attempt to establish the limits within which our procedure does and does not provide benefit. Our results indicate that Particle Swarm Optimization (PSO) is effective in training multiple agents under uncertainty and that the factored evolutionary algorithm (FEA) reduces swarm and policy updates. This paper contributes to the field of cooperative co-evolutionary algorithms by proposing a method by which factored evolutionary techniques can significantly improve how multiple RL agents collaborate under extreme uncertainty to solve complex tasks faster than a single agent can under identical conditions.
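For context, the policy learner named above is Expected SARSA. Below is a minimal sketch of its tabular update rule, assuming an epsilon-greedy behavior policy; the swarm- and factor-level machinery of DeadReckon is omitted, and the function is illustrative rather than the paper's implementation.

```python
import numpy as np

def expected_sarsa_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, epsilon=0.1):
    """One tabular Expected SARSA update on a Q table of shape (n_states, n_actions)."""
    n_actions = Q.shape[1]
    # Probability of each next action under an epsilon-greedy policy over Q[s_next].
    pi = np.full(n_actions, epsilon / n_actions)
    pi[np.argmax(Q[s_next])] += 1.0 - epsilon
    # The TD target uses the expectation over next actions rather than a single sampled action.
    expected_q = np.dot(pi, Q[s_next])
    Q[s, a] += alpha * (r + gamma * expected_q - Q[s, a])
    return Q
```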
Since high school, I have been enthralled with trying to find order in the chaos of the markets through algorithms. A couple of years ago, I put my money where my code was and incorporated a legitimate algorithmic trading fund to pursue this interest further. The fund operates fully autonomously and trades through artificial intelligence software I built called Goldilox (all AI algorithms are effectively just optimization procedures, and the story of Goldilocks and the Three Bears is the original tale of optimization).
I've turned the problem of stock trading into a controls problem, leveraging control theory, Kalman filters, and reinforcement learning to take on the market from a slightly different angle.
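As a rough illustration of that controls framing, the sketch below tracks a latent price level and trend with a simple constant-velocity Kalman filter; the state model, noise values, and names are illustrative assumptions, not Goldilox's actual implementation.

```python
import numpy as np

def kalman_track(prices, process_var=1e-4, obs_var=1e-2):
    """Estimate a latent [level, trend] state from noisy price observations."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
    H = np.array([[1.0, 0.0]])               # we only observe the price level
    Q = process_var * np.eye(2)              # process noise (assumed)
    R = np.array([[obs_var]])                # observation noise (assumed)
    x = np.array([[prices[0]], [0.0]])       # initial state estimate
    P = np.eye(2)                            # initial covariance
    levels = []
    for z in prices:
        # Predict forward one step.
        x, P = F @ x, F @ P @ F.T + Q
        # Correct with the new observation.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        levels.append(float(x[0, 0]))
    return levels
```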
At the time of writing, the fund is divided into five different "sub-funds", each one analyzing a different AI algorithm and its rate of return. I publish my findings on these algorithms as meaningful results are achieved. What started out as an experiment has turned into a very rewarding part-time job, with Goldilox retraining models and sifting through data while the markets are closed and it isn't trading. Goldilox uses a LOT of NLP and reinforcement learning algorithms and retrains itself several times per week with a custom federated learning pipeline, all built by me. Goldilox also contains its own back-testing framework to fully vet each algorithm under historical scenarios before deployment.
Each share of the fund is tokenized by a cryptocurrency token I built. In other words, the fund is legally tied to a smart contract. Here is a link to the smart contract: https://etherscan.io/token/0xe1ef87078afb008e17dd8c3d604999f332b2724f
Nearly every waking moment not spent on work or other projects is spent maturing Goldilox and its capabilities. I do not believe the markets can be "solved," but I do believe AI can find patterns that are not totally obvious to humans. https://goldilox.ai
Seekar TRACE (Target Recognition and Acquisition in Complex Environments) was a target recognition platform originally developed for military applications. By adding one hell of a pre-processing stack, we built a very impressive target recognition platform that could identify targets through smoke, fog, rain, forestry, cracked camera lenses, and more. On top of that, we were able to compress it well enough to run on edge devices like phones, tablets, microcontrollers, and drone cameras. We coupled it with radar and electrochemical sensors to build a multi-functional sensor stack capable of performing autonomous navigation for small vehicles. Eventually, we forked Seekar TRACE to also be used in medical imaging to identify difficult patterns in X-rays. Some early highlights of the platform can be found here: https://seekartech.com/defense
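The actual TRACE pre-processing stack is proprietary, but as a generic illustration of the kind of clean-up that helps a detector see through degraded imagery, a denoise-plus-contrast pass might look like the sketch below; the steps and parameters are assumptions for illustration only.

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr):
    """Generic clean-up pass for hazy or degraded imagery ahead of a detector."""
    # Denoise while preserving edges.
    denoised = cv2.bilateralFilter(frame_bgr, 9, 75, 75)
    # Boost local contrast with CLAHE on the luminance channel.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Mild sharpening to recover edge detail lost to smoke or fog.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(enhanced, -1, kernel)
```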
This is a pet project that has turned into a significant blend of NLP, computer vision, and deep RL. The project is now on its third iteration: I've built and programmed a robotic arm to exactly mirror my left arm by interpreting its movement through a camera. There is a computer vision model using stacked hourglass networks (https://arxiv.org/pdf/1603.06937.pdf) running on an iOS app, a SARSA reinforcement learning model, and a lot of closed-loop feedback control. As a result, the project requires fluency in three programming languages: Swift, Python, and C++. Eventually, I plan to have the robot arm learn new tasks (e.g., using a screwdriver) by analyzing YouTube videos and learning the actions performed.
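A minimal sketch of the pose-to-joint mapping at the heart of the mirroring loop, assuming the hourglass model returns 2D keypoints for the shoulder, elbow, and wrist; the function and its geometry are illustrative, not the project's actual code.

```python
import numpy as np

def joint_angles_from_keypoints(shoulder, elbow, wrist):
    """Convert 2D keypoints (x, y) into shoulder and elbow angles in degrees."""
    upper = np.asarray(elbow) - np.asarray(shoulder)
    fore = np.asarray(wrist) - np.asarray(elbow)
    # Shoulder angle relative to the image's horizontal axis.
    shoulder_deg = np.degrees(np.arctan2(upper[1], upper[0]))
    # Elbow angle is the angle between the upper-arm and forearm vectors.
    cos_elbow = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore) + 1e-8)
    elbow_deg = np.degrees(np.arccos(np.clip(cos_elbow, -1.0, 1.0)))
    return shoulder_deg, elbow_deg
```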
If a human is injured, the body immediately begins repairing itself. Blood starts to clot in flesh wounds, antibodies are produced in response to a virus, and neural structure changes to accommodate brain damage through neuroplasticity. Achieving an accurate model of artificial neuroplasticity is my ongoing focus with this project. In order to grant robots the same liberties that humans enjoy with regard to self-repair, I show how this can effectively be accomplished through the cooperation and reorganization of two neural networks. The process takes principles from few-shot learning in natural language processing (NLP, https://arxiv.org/abs/2001.07676) and generative adversarial networks (GANs, https://arxiv.org/abs/1406.2661). If a sensor sending input to an artificial neural network becomes damaged, the network attached to that sensor repurposes itself for another task through self-reorganization and on-the-fly transfer learning. In other words, a robot can exhibit self-repair by borrowing from principles of biological neuroplasticity. Read more here: http://dx.doi.org/10.13140/RG.2.2.27593.06248
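A highly simplified sketch of the repurposing idea, assuming a PyTorch-style setup: when one sensor's input stream fails, its encoder is frozen and a fresh lightweight head is attached and fine-tuned for a different task. This is only an illustration of the concept; the module names and shapes are assumptions.

```python
import torch.nn as nn

class SensorEncoder(nn.Module):
    """Encoder originally trained on the now-damaged sensor's input stream."""
    def __init__(self, in_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def repurpose(encoder, new_task_dim, hidden=128):
    """Freeze the old encoder and attach a fresh head for the new task."""
    for p in encoder.parameters():
        p.requires_grad = False           # keep the learned representation intact
    head = nn.Linear(hidden, new_task_dim)  # only this head is trained on-the-fly
    return nn.Sequential(encoder, head)
```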
For local neuropsychology clinics, I built voice and emotion recognition software to detect neuropsychological conditions through natural language processing and computer vision. The software was updated biweekly and advised the neuropsychologist on symptoms detected in diagnostic interviews through a camera and microphone. The AI runs through a mobile app and is intended to help screen for symptoms of neurodegenerative disorders such as Alzheimer's disease and Parkinson's disease. An onboard AI agent tracks eye movement, blink rate, skeletal movement, and facial expressions, and even analyzes speech for vocal indicators. The app does not need a cloud connection in order to operate and is designed around HIPAA-compliant data practices. See a demo here: https://seekartech.com/healthcare
At Seekar, I built a computer vision model capable of identifying seven different respiratory conditions in chest X-rays. We leveraged knowledge distillation and "cluster models" to beat the performance of a single larger model. The result was a hierarchy of much smaller models that we could compress far enough to fit in a mobile app. The app went through an ethics-board-approved clinical trial and proved to be 95.5% as accurate as a radiologist in detecting the seven respiratory conditions.
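For context on the distillation mentioned above, a standard student-teacher objective looks roughly like the sketch below; the temperature, weighting, and model details are generic assumptions rather than the actual cluster-model training code.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft teacher targets with the ordinary hard-label loss."""
    # Soft targets: the student mimics the teacher's temperature-scaled distribution.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```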
Seekar donated the app in an effort to help relieve physician demand during the COVID-19 pandemic.
Read more here: https://www.researchgate.net/publication/345761193_Cluster_Neural_Networks_for_Edge_Intelligence_in_Medical_Imaging
See the product: https://apps.apple.com/us/app/covid-ai/id1505887668
At Seekar, my team and I were recruited to develop a device for the autonomous detection of explosive compounds such as C4, TNT, RDX, and gunpowder. In collaboration with professors at the University of Texas at Dallas, we developed a device and sensor capable of detecting these compounds at a distance of nearly 30 feet. We also developed a fully custom motherboard, with me leading development of all firmware and software. The result was a truly novel device, developed entirely in-house, that incorporated electrochemical sensing and edge AI. Paired with radar, computer vision, and some control hardware, this technology will enable drones to navigate through buildings in order to find explosives.
A significant research effort showing how the TrueDepth camera on an Apple iPhone (and time-of-flight sensors more generally) can be repurposed as a thermometer. We establish our hypothesis, show our experimental setup, and analyze data obtained from experiments. The results suggest that the TrueDepth sensor has thermometric reading capabilities due to the part of the electromagnetic spectrum in which the sensor's wavelength resides.
Read more here: https://github.com/KordelFranceTech/Research-TrueDepthCameraAsThermometer
As part of an effort to attain more control over my training performance, I've developed my own framework for implementing neural networks. Activation functions, convolution, SGD, backprop, and more are all implemented from scratch with NumPy as the only library. It is effectively my own custom-built version of TensorFlow.
The twist I've added is that I've encoded each neuron with an identifier so I can trace firing patterns during inference throughout the network. This allows me to see which neurons are used more frequently so that the network can perform "self-pruning" to make itself smaller and more efficient.
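As a rough illustration of the tracing idea (the framework's actual internals differ), each neuron can carry an ID and a firing counter, and rarely used neurons can be masked out during pruning:

```python
import numpy as np

class TracedLayer:
    """Dense ReLU layer whose neurons carry IDs and firing counters for self-pruning."""
    def __init__(self, in_dim, out_dim, layer_id=0):
        self.w = np.random.randn(in_dim, out_dim) * 0.01
        self.ids = [f"L{layer_id}_N{i}" for i in range(out_dim)]  # per-neuron identifiers
        self.fire_counts = np.zeros(out_dim)                       # how often each neuron activates
        self.mask = np.ones(out_dim)                               # 1 = active, 0 = pruned

    def forward(self, x):
        a = np.maximum(0.0, x @ self.w) * self.mask  # ReLU with pruned neurons zeroed out
        self.fire_counts += (a > 0).sum(axis=0)      # record which neurons fired this pass
        return a

    def self_prune(self, threshold):
        """Disable neurons that fired fewer times than the threshold."""
        self.mask = (self.fire_counts >= threshold).astype(float)
```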
Find it here: https://github.com/KordelFranceTech/Academic-DeepNeuralNetsFromScratch
Skoped AI was designed to train law enforcement and homeland security officers in mixed reality. My component, called Range Vision, was an iOS application that ran on an iPhone mounted to a rifle scope and provided augmented reality overlays for ballistic trajectory calculations. This allowed the trainee to see how estimated wind conditions and range would affect the drop, drift, and spin of the rifle shot by visualizing the shot before it was fired. I eventually built a 4-degrees-of-freedom (4-DOF) calculator into the software that computed bullet drop, drift, pitch, spin, and even aerodynamic jump. The app incorporated target recognition and included an optical rangefinder that used two cameras to measure distance instead of a laser.
I also built Bluetooth pairing with a Kestrel wind meter into the software to read wind data at the source. One of the most difficult aspects of ballistics calculations is estimating wind conditions at the target so that the marksman can compensate for changing wind patterns. Our last effort on the project was to use topographical information, as well as the camera's view through the rifle scope, to predict wind at the target and increase confidence in the trajectory.
The coolest thing about this project is that it allowed me to translate a heavy dose of aerodynamics equations and mathematics into code. The product was validated at ranges up to 1200 yards, but the calculations theoretically hold for much longer distances.
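To give a flavor of the math, here is a heavily simplified point-mass trajectory integrator (metric units for simplicity) with quadratic drag and a steady crosswind; it omits the spin, pitch, and aerodynamic-jump effects the real 4-DOF solver handles, and every constant is an illustrative assumption.

```python
import numpy as np

def point_mass_trajectory(muzzle_velocity=800.0, angle_deg=0.0, crosswind=3.0,
                          drag_k=0.0008, dt=0.001, max_range=1000.0):
    """Euler-integrate a simplified point-mass trajectory in meters and m/s."""
    g = np.array([0.0, -9.81, 0.0])                    # gravity (m/s^2)
    wind = np.array([0.0, 0.0, crosswind])             # steady crosswind (m/s)
    vel = muzzle_velocity * np.array([np.cos(np.radians(angle_deg)),
                                      np.sin(np.radians(angle_deg)), 0.0])
    pos = np.zeros(3)
    # Stop at max range or once the bullet falls 50 m below the bore line.
    while pos[0] < max_range and pos[1] > -50.0:
        v_rel = vel - wind                             # velocity relative to the air mass
        drag = -drag_k * np.linalg.norm(v_rel) * v_rel # quadratic drag opposing motion (assumed coefficient)
        vel = vel + (g + drag) * dt
        pos = pos + vel * dt
    return pos  # downrange distance, drop, and wind drift in meters
```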
Find it here: https://apps.apple.com/us/app/range-vision/id1465211163
Copyright © 2022 Kordel Kade France - All Rights Reserved.