<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ivan Bocharov | BIASlab</title><link>http://biaslab.org/author/ivan-bocharov/</link><atom:link href="http://biaslab.org/author/ivan-bocharov/index.xml" rel="self" type="application/rss+xml"/><description>Ivan Bocharov</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sat, 26 Jun 2021 16:37:31 +0200</lastBuildDate><image><url>http://biaslab.org/author/ivan-bocharov/avatar_hu_ba327e082c8b4277.jpg</url><title>Ivan Bocharov</title><link>http://biaslab.org/author/ivan-bocharov/</link></image><item><title>Extended Variational Message Passing for Automated Approximate Bayesian Inference</title><link>http://biaslab.org/publication/extended-variational-message-passing-for-automated-approximate-bayesian-inference/</link><pubDate>Sat, 26 Jun 2021 16:37:31 +0200</pubDate><guid>http://biaslab.org/publication/extended-variational-message-passing-for-automated-approximate-bayesian-inference/</guid><description/></item><item><title>ForneyLab: A Toolbox for Biologically Plausible Free Energy Minimization in Dynamic Neural Models</title><link>http://biaslab.org/publication/forneylab-biologically-plausible-fem/</link><pubDate>Sun, 23 Sep 2018 13:42:00 +0200</pubDate><guid>http://biaslab.org/publication/forneylab-biologically-plausible-fem/</guid><description/></item><item><title>Acoustic scene classification from few examples</title><link>http://biaslab.org/publication/asc-from-few-examples/</link><pubDate>Sun, 09 Sep 2018 09:07:00 +0200</pubDate><guid>http://biaslab.org/publication/asc-from-few-examples/</guid><description/></item><item><title>K-shot learning of acoustic context</title><link>http://biaslab.org/publication/k-shot-learning-acoustic-context/</link><pubDate>Fri, 08 Dec 2017 16:37:31 
+0200</pubDate><guid>http://biaslab.org/publication/k-shot-learning-acoustic-context/</guid><description/></item><item><title>CoHear</title><link>http://biaslab.org/project/cohear/</link><pubDate>Wed, 01 Feb 2017 00:00:00 +0000</pubDate><guid>http://biaslab.org/project/cohear/</guid><description>&lt;p&gt;&lt;strong&gt;Hearing loss is a serious health condition that has been associated with early dementia and cognitive decline. Still, hearing aid market penetration remains low, in particular among the large population afflicted with ‘mild-to-moderate’ hearing impairment. This is mainly due to two reasons: stigma (association with old age) and hearing aid (HA) sound quality. The recent commercial introduction of fashionable ‘hearables’ will likely alleviate the stigma issue. Recent advances in machine learning and cloud computing open new avenues for tackling the sound quality issue in hearing aids. In this project, we intend to develop a (crowd-based) collaborative design approach to improving sound quality for the mild-to-moderately hearing-impaired population.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We expect to build a working prototype for collaboratively designed hearing algorithms that can be applied to a new class of ‘smart hearing devices’ with high appeal to mild-to-moderately hearing-impaired patients. As an additional benefit, we hope that our technology will ease the transition from hearables to professional hearing aid technology for the moderate-to-profound hearing-impaired population.&lt;/p&gt;</description></item></channel></rss>