Praying mantises have a form of 3D vision useful for computer vision AI algorithms

https://futurism.com/?p=122293

— excerpt below —

Scientists have gained new insight into the way praying mantises see the world, and this knowledge could potentially open up new avenues for computer vision.

Unlike other insects, praying mantises have a pair of large, forward-facing eyes. Humans and other primates use this kind of stereo sight setup to compare two slightly different viewpoints in order to gauge depth. However, it seems that praying mantises see things differently than we do.

Using beeswax as an adhesive, a team led by Vivek Nityananda at the University of Newcastle affixed lenses to praying mantises’ faces, being careful not to cause injury. One lens was green and the other was blue, a setup that allowed the scientists to control what each eye could see.

The scientists then projected films onto a screen in front of the insects. The first film featured a moving dot, which the mantises attacked, demonstrating that they could perceive depth if an object moved. Then, the dot was manipulated to move in two different directions, a disparity that would prevent human eyes from comprehending the image, but the mantises still attacked it.
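The color-filter setup described above works because each eye sees only the dot drawn in its own filter's color, so the two eyes can be shown different positions on a single screen. As a minimal sketch of the idea (the function name and parameters here are hypothetical, not from the study), one frame of such a dichoptic "anaglyph" stimulus could be built like this:

```python
import numpy as np

def anaglyph_frame(height, width, left_xy, right_xy, radius=5):
    """Render one dichoptic frame: the eye behind the green filter sees
    only the green-channel dot, the eye behind the blue filter only the
    blue-channel dot, so each eye can be shown a different position."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for (cx, cy), chan in ((left_xy, 1), (right_xy, 2)):  # 1 = G, 2 = B
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        frame[mask, chan] = 255
    return frame

# Dots at different horizontal positions create a binocular disparity,
# which the brain (mantis or human) can read as depth.
f = anaglyph_frame(100, 100, left_xy=(40, 50), right_xy=(60, 50))
```

Animating the two dots independently is what let the researchers present each eye with motion that a conventional stereo system could not fuse.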

This suggests that mantises have a previously unknown type of vision. It relies on targets moving around, but those movements don’t necessarily have to match between one eye and the other. It’s based on motion over time, rather than a direct comparison.

Being insects, mantises have fewer than a million neurons, far fewer than the roughly 85 billion possessed by humans. However, thanks to this unique form of vision, they can still see in three dimensions, just as we can.

The researchers noted in a press release that their discovery could lead to the development of an algorithm based on mantis vision. Small robots, such as those used to respond to disasters, could use this algorithm to assess their surroundings without the need for a sophisticated “brain.”
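The article does not describe what such an algorithm would look like, but the core idea reported for mantis stereopsis, matching *where things move* between the eyes rather than matching static image detail, can be sketched in a few lines. Everything below is an illustrative toy under that assumption, not the researchers' method:

```python
import numpy as np

def motion_based_disparity(left_seq, right_seq, max_shift=8):
    """Estimate horizontal disparity by correlating *temporal change*
    between the two eyes: static texture contributes nothing, and only
    the locations of motion are matched, in the spirit of the reported
    mantis strategy."""
    # Motion energy: absolute frame-to-frame difference, summed over time.
    motion_l = np.abs(np.diff(left_seq, axis=0)).sum(axis=0)
    motion_r = np.abs(np.diff(right_seq, axis=0)).sum(axis=0)
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        score = (motion_l * np.roll(motion_r, shift, axis=1)).sum()
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A dot drifting downward at column 30 in the left eye and column 34 in
# the right eye: the motion maps align when the right map is shifted
# back by 4 columns.
T, H, W = 6, 32, 64
left = np.zeros((T, H, W)); right = np.zeros((T, H, W))
for t in range(T):
    left[t, 10 + t, 30] = 1.0
    right[t, 10 + t, 34] = 1.0
print(motion_based_disparity(left, right))  # prints -4
```

Because only the motion-energy maps are compared, this kind of matcher is cheap, which is why the researchers suggest it could suit small robots without a sophisticated "brain."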


Cancer drug has anti-aging properties

https://futurism.com/?p=113944

A Key Enzyme
A new study has been able to extend the lifespan of worms and flies by inhibiting RNA polymerase III (Pol III). Since the enzyme is common to all animal species, including humans, researchers hope the discovery could lead to groundbreaking new therapies.

Researchers have long known that Pol III plays a key role in cell growth and the production of proteins, but recent insights revealed that when its activity was reduced during adulthood, the survival of yeast cells (as well as the longevity of flies and worms) could be extended by an average of 10 percent.

“We’ve uncovered a fundamental role for Pol III in adult flies and worms: its activity negatively impacts stem cell function, gut health and the animal’s survival,” commented first author Danny Filer of the UCL Institute of Healthy Ageing, in a press release. “When we inhibit its activity, we can improve all these. As Pol III has the same structure and function across species, we think its role in mammals and humans warrants investigation as it may lead to important therapies.”

Yeast, flies, and worms were selected for the study as they are not closely related, but all bear the enzyme. Various techniques, including insertional mutagenesis and RNA mediated interference, were used to inhibit Pol III and observe the results.

When it was inhibited in the gut of flies and worms, they lived longer. This was also the case when it was inhibited in only the flies’ intestinal stem cells.

Extending Life
The results of Pol III inhibition have been compared to the effects of the immune-suppressing drug rapamycin, which is taken by cancer patients and organ transplant recipients. The drug has previously been shown to extend the lifespan of dogs. This latest study could help researchers better understand exactly how rapamycin works.

“We now think that Pol III promotes growth and accelerates aging in response to a signal inhibited by rapamycin and that inhibiting Pol III is sufficient to result in flies living longer as if they were given rapamycin,” said co-author Dr. Nazif Alic. “If we can investigate this mechanism further and across a wider range of species, we can develop targeted antiaging therapies.”

The rapamycin compound was first discovered on Easter Island and has since been used to create drugs capable of extending the lives of several species. However, there hasn’t been a study of its effects on human subjects — at least not yet.

Gaining a further understanding of the mechanism behind rapamycin could certainly make the idea of a human trial more tenable. The team plans to continue their research into how inhibiting Pol III affects an adult organism, and why doing so results in a longer lifespan. An anti-aging pill is still a long way off, but this type of research could provide some key foundational knowledge.

A new study has overturned a hundred-year-old assumption about what exactly makes a neuron ‘fire’

https://futurism.com/?p=116607

—-

The human brain contains a little over 80 billion neurons, each joining with other cells to create trillions of connections called synapses.

The numbers are mind-boggling, but the way each individual nerve cell contributes to the brain’s functions is still an area of contention. A new study has overturned a hundred-year-old assumption on what exactly makes a neuron ‘fire’, posing new mechanisms behind certain neurological disorders.

A team of physicists from Bar-Ilan University in Israel conducted experiments on rat neurons grown in a culture to determine exactly how a neuron responds to the signals it receives from other cells.

To understand why this is important, we need to go back to 1907 when a French neuroscientist named Louis Lapicque proposed a model to describe how the voltage of a nerve cell’s membrane increases as a current is applied.

Once reaching a certain threshold, the neuron reacts with a spike of activity, after which the membrane’s voltage resets.

What this means is a neuron won’t send a message unless it collects a strong enough signal.

Lapicque’s equations weren’t the last word on the matter, not by far. But the basic principle of his integrate-and-fire model has remained relatively unchallenged in subsequent descriptions, today forming the foundation of most neuronal computational schemes.
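The integrate-and-fire idea described above is simple enough to state in a few lines of code. The following is a minimal leaky integrate-and-fire sketch in the spirit of Lapicque's model (the parameter values are illustrative, not from any study): voltage accumulates with input current, leaks back toward rest, and a spike fires with a reset when it crosses the threshold.

```python
import numpy as np

def integrate_and_fire(current, dt=0.1, tau=10.0, v_rest=0.0,
                       v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage integrates
    input current, leaks toward rest, and emits a spike (then resets)
    whenever it crosses the threshold."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A supra-threshold current produces regular firing; a weak current
# saturates below threshold and never fires - the "strong enough
# signal" requirement described above.
strong = integrate_and_fire(np.full(1000, 2.0))
weak = integrate_and_fire(np.full(1000, 0.5))
print(len(strong) > 0, len(weak) == 0)
```

Note that in this classic formulation all inputs are pooled into one summed current, which is exactly the assumption the new study challenges.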

Image credit: NICHD/Flickr
According to the researchers, the lengthy history of the idea has meant few have bothered to question whether it’s accurate.

“We reached this conclusion using a new experimental setup, but in principle these results could have been discovered using technology that has existed since the 1980s,” says lead researcher Ido Kanter.

“The belief that has been rooted in the scientific world for 100 years resulted in this delay of several decades.”

The experiments approached the question from two angles – one exploring the nature of the activity spike based on exactly where the current was applied to a neuron, the other looking at the effect multiple inputs had on a nerve’s firing.

Their results suggest the direction of a received signal can make all the difference in how a neuron responds.

A weak signal from the left arriving with a weak signal from the right won’t combine to build a voltage that kicks off a spike of activity. But a single strong signal from a particular direction can result in a message.

This potentially new way of describing what’s known as spatial summation could lead to a novel method of categorising neurons, one that sorts them by how they compute incoming signals or by how finely they resolve signals arriving from a particular direction.
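The contrast between the two summation schemes can be made concrete with a toy model. This is purely illustrative (the function and thresholds are hypothetical, not from the paper): the classic model pools all inputs into one sum, while the direction-sensitive behaviour reported here thresholds each direction on its own.

```python
def directional_fire(left_inputs, right_inputs, threshold=1.0):
    """Toy contrast between classic spatial summation and the
    direction-sensitive behaviour reported in the study."""
    # Classic integrate-and-fire: every input pools into one sum.
    classic = sum(left_inputs) + sum(right_inputs) >= threshold
    # Directional variant: each side must cross threshold on its own.
    directional = (sum(left_inputs) >= threshold
                   or sum(right_inputs) >= threshold)
    return classic, directional

# Two weak signals from opposite sides: the classic model fires,
# the directional model does not.
print(directional_fire([0.6], [0.6]))  # prints (True, False)
# One strong signal from a single direction: both models fire.
print(directional_fire([1.2], []))     # prints (True, True)
```

Real dendritic integration is far richer than this, but the toy captures the qualitative finding: weak opposite-direction inputs failing to sum while a single strong directional input succeeds.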

Better yet, it could even lead to discoveries that explain certain neurological disorders.

It’s important not to throw out a century of wisdom on the topic on the back of a single study. The researchers also admit they’ve only looked at a type of nerve cell called pyramidal neurons, leaving plenty of room for future experiments.

But fine-tuning our understanding of how individual units combine to produce complex behaviours could spread into other areas of research. With neural networks inspiring future computational technology, identifying any new talents in brain cells could have some rather interesting applications.

This research was published in Scientific Reports.

The Wild Week in AI – Andrew Ng’s new Manufacturing AI company; Google’s China Lab; Deep Learning Trends Tutorial; and more | Revue

https://www.getrevue.co/profile/wildml/issues/the-wild-week-in-ai-andrew-ng-s-new-manufacturing-ai-company-google-s-china-lab-deep-learning-trends-tutorial-and-more-87334

— excerpt below —

The Wild Week in AI – Andrew Ng’s new Manufacturing AI company; Google’s China Lab; Deep Learning Trends Tutorial; and more

The Wild Week in AI
December 18 · Issue #72 · View online
The Wild Week in AI is a weekly AI & Deep Learning newsletter curated by @dennybritz.
If you enjoy the newsletter, please consider sharing it on Twitter, Facebook, etc! Really appreciate the support 🙂
This Week’s Sponsor: Butterfly Network
Butterfly has built a new handheld ultrasound device that will revolutionize health care: the Butterfly iQ. This ultrasound device fits in your pocket, connects to your smartphone, and stores medical data securely in the cloud.
Butterfly’s machine learning team works on building intelligence into the device to help clinicians make life-saving decisions. Butterfly is looking for researchers interested in continuing to develop and publish new machine learning algorithms while also having a direct and immense, real-world impact. Find out more at butterflynetwork.com.
News
Andrew Ng launches AI + Manufacturing Startup
MEDIUM.COM – Share
Founded by famous professor Andrew Ng, Landing.ai is a new artificial intelligence company focused on the manufacturing industry. At this point, it is still unclear what kind of products Landing.ai is working on.
Google opens AI Center in Beijing, China
WWW.BLOOMBERG.COM – Share
The Google AI China Center will have a small group of researchers supported by several hundred China-based engineers. “It will be a small team focused on advancing basic AI research in publications, academic conferences, and knowledge exchange,” said Fei-Fei Li, the chief scientist at Google’s cloud unit who will lead the Beijing research center.
AlphaGo Teach: Discover new and creative ways of playing Go
ALPHAGOTEACH.DEEPMIND.COM – Share
This tool provides analysis of 6,000 of the most popular opening sequences from the recent history of Go, using data from 231,000 human games and 75 games AlphaGo played against human players.
AI-Assisted Fake Adult Videos
MOTHERBOARD.VICE.COM – Share
Someone used a Machine Learning algorithm to paste the face of ‘Wonder Woman’ star Gal Gadot onto an adult video. It’s not going to fool anyone who looks closely. Sometimes the face doesn’t track correctly and there’s an uncanny valley effect at play, but at a glance, it seems believable.
Posts, Articles, Tutorials
Deep Learning: Practice and Trends (NIPS 2017 Tutorial)
WWW.YOUTUBE.COM – Share
An excellent tutorial on the building blocks of today’s Deep Learning systems. The tutorial covers Convolutional Models, Autoregressive Models, Domain Alignment, Meta Learning, Graph Networks, and more.
Deep Learning for NLP – Advancements and Trends in 2017
TRYOLABS.COM – Share
A good summary of Deep Learning advancements for NLP in 2017. This post covers pre-trained word embeddings, the sentiment neuron, SemEval 2017 results, abstractive summarization systems, unsupervised Machine Translation, and more.
Introduction to Gaussian Processes
BRIDG.LAND – Share
Gaussian processes may not be at the center of current machine learning hype but are still used at the forefront of research – they were recently seen automatically tuning the MCTS hyperparameters for AlphaGo Zero for instance.
Training Sequence Models with Attention
AWNI.GITHUB.IO – Share
Several practical tips for training sequence-to-sequence models with attention, such as those used in Machine Translation, or text summarization.
Code, Projects & Data
MAgent Platform for Many-agent Reinforcement Learning
GITHUB.COM – Share
MAgent is a research platform for many-agent reinforcement learning. Unlike previous research platforms that focus on reinforcement learning research with a single agent or few agents, MAgent aims at supporting reinforcement learning research that scales up from hundreds to millions of agents.
Visual to Sound: Generating Natural Sound for Videos in the Wild
BVISION11.CS.UNC.EDU – Share
In this paper and project, the authors pose the problem of generating sound from visual input and apply learning-based methods to generate raw waveform samples given input video frames.
Exploring the ChestXray14 dataset: Problems
LUKEOAKDENRAYNER.WORDPRESS.COM – Share
A detailed analysis of the ChestXray14 dataset, and why it may not be fit for training medical AI systems to do diagnostic work. Such analyses of real-world datasets are extremely important, and I hope to see more of them in the future.
Highlighted Research Papers
Libratus AI for heads-up no-limit poker (Science)
SCIENCE.SCIENCEMAG.ORG – Share
The authors present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold’em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. The game-theoretic approach uses application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy.
[1712.03351] Peephole: Predicting Network Performance Before Training
ARXIV.ORG – Share
An approach to predict the performance of a network before training, based on its architecture. The authors develop a way to encode individual layers into vectors and bring them together to form an integrated description via LSTM. Taking advantage of the recurrent network’s expressive power, this method can reliably predict the performances of various network architectures.
[1712.04741] Mathematics of Deep Learning
ARXIV.ORG – Share
Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification. However, the mathematical reasons for this success remain elusive. This tutorial will review recent work that aims to provide a mathematical justification for several properties of deep networks, such as global optimality, geometric stability, and invariance of the learned representations.