This is the cumulative work of my four years at ETH Zurich. The common umbrella topic is the development of expert-aware algorithms in computer vision: algorithms that incorporate strong prior knowledge from field experts into modern algorithmic pipelines. The specific topics covered in the thesis are described in more detail below.
Transformation invariance is crucial for many computer vision applications: it makes algorithms robust to nuisance variations in the data and makes the learning process more efficient. We developed two novel algorithms that combine the flexibility of general-purpose algorithms (CNNs and Convolutional Decision Jungles) with theoretically guaranteed transformation invariance.
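A minimal illustration of the underlying idea (not the thesis algorithms themselves): averaging any feature over a finite transformation group, here the four 90-degree rotations, yields a descriptor that is exactly invariant to those transformations. The toy feature below is made up for the example.

```python
import numpy as np

def c4_invariant(patch, feature_fn):
    """Average a feature over the four 90-degree rotations of a patch;
    the result is exactly invariant to 90-degree rotations of the input,
    because rotating the input only permutes the averaged terms."""
    return np.mean([feature_fn(np.rot90(patch, k)) for k in range(4)], axis=0)

# Toy feature: column means of the patch.
patch = np.arange(16, dtype=float).reshape(4, 4)
d1 = c4_invariant(patch, lambda p: p.mean(axis=0))
d2 = c4_invariant(np.rot90(patch), lambda p: p.mean(axis=0))
print(np.allclose(d1, d2))  # True
```

The same group-averaging principle generalizes to larger transformation groups; the trade-off is the cost of evaluating the feature once per group element.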
An anisotropic data set is a collection of sequential images representing the continuous evolution of structures, in which the resolution along one dimension of the stack is much lower than the resolution along the other two. Two examples are serial-section transmission electron microscopy and low-frame-rate video.
The key to anisotropic data analysis is to exploit dependencies between the images/frames of the stack to resolve ambiguities. To find dependent regions we use registration techniques such as optical flow. We developed two novel approaches, one for data enhancement and one for neuronal segmentation.
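As a toy illustration of registration between adjacent frames (the actual pipeline uses optical flow, which also handles local deformation), a global translation can be estimated from the peak of the frames' FFT cross-correlation:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the circular shift s such that b ~= np.roll(a, s),
    via the peak of the FFT cross-correlation of the two frames."""
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(estimate_shift(frame, shifted))  # (3, -5)
```

Once corresponding regions are aligned, information from the neighboring frames can be used to disambiguate the current one.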
We introduce a framework that automatically tunes the hyper-parameters of machine learning algorithms by exploiting external global statistics of the structures of interest. We focus on medical imaging problems, where such statistics naturally come from prior biological knowledge.
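A minimal sketch of the idea, not the framework itself: if an external statistic such as the expected foreground fraction is known from prior studies, a detection threshold can be derived from it directly instead of being tuned by hand. The score map and the 5% target fraction below are invented for illustration.

```python
import numpy as np

def tune_threshold(score_map, target_fraction):
    """Pick the detection threshold so that the detected foreground
    fraction matches an externally known global statistic.
    The (1 - f)-quantile of the scores leaves fraction f above it."""
    return np.quantile(score_map, 1.0 - target_fraction)

rng = np.random.default_rng(1)
scores = rng.random((100, 100))        # stand-in for a detector's score map
t = tune_threshold(scores, target_fraction=0.05)
print((scores > t).mean())             # close to 0.05
```

The same matching principle extends to other statistics (object counts, size distributions), with the threshold found by a one-dimensional search instead of a closed-form quantile.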
As an example, we developed the first whole-brain algorithm for fully automated amyloid plaque detection and analysis in cleared mouse brains. The algorithm significantly reduces human involvement, providing less subjective and more biologically plausible results, and hopefully bringing humanity one step closer to a cure for Alzheimer's disease.
A for-fun and for-experience project. I played around with various simple trading strategies (arbitrage, trend following, market making), with several exchanges' APIs (Kraken, BTC-e, Bittrex and the no-longer-existing Cryptsy), and with Python asynchronous programming. In case you are wondering: it was slightly profitable for a while, when market volatility was very high and competition was low, and afterwards it was not.
I also have a Telegram channel where I share some thoughts and news about crypto (in Russian): https://t.me/cryptohodl.
The idea of the project was to develop a universal, tuning-insensitive algorithm for mining attractive areas from geo-tagged photos uploaded to social networks. The goal was to provide not only individual points but whole regions that could interest tourists. The project was developed for Yandex.
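A rough sketch of the kind of density mining involved (the actual algorithm was more elaborate and, unlike this toy, insensitive to tuning): bin photo coordinates on a grid and keep cells with enough photos; adjacent dense cells then form candidate regions rather than single points. All coordinates and thresholds below are invented for illustration.

```python
import numpy as np

def dense_cells(lat, lon, bins=50, min_count=10):
    """Grid-density sketch: histogram geo-tagged photo coordinates and
    keep cells containing at least `min_count` photos."""
    counts, lat_edges, lon_edges = np.histogram2d(lat, lon, bins=bins)
    return counts >= min_count, lat_edges, lon_edges

rng = np.random.default_rng(2)
# Uniform background noise plus one photo hotspot (a hypothetical landmark).
lat = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.5, 0.01, 300)])
lon = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.5, 0.01, 300)])
mask, _, _ = dense_cells(lat, lon)
print(mask.sum())  # a handful of dense cells around the hotspot
```

In practice the dense cells would then be merged into connected regions and ranked, which is where the interesting (and tuning-sensitive) part of the problem lives.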
The goal was to develop a system that would detect, analyse and classify abnormal behavior (outliers, changes of trend) in Internet markets. The time-series analysis algorithm involved independent analysis of the trend, the seasonal component, and various noise models. The project was developed for Yandex.
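A simplified sketch of that decomposition (the production system used richer noise models): a moving-average trend, per-phase seasonal means, and robust z-scoring of the residual to flag outliers. The series below is synthetic.

```python
import numpy as np

def detect_anomalies(x, period=7, window=14, z=5.0):
    """Flag points whose residual, after removing a moving-average trend
    and a per-phase seasonal component, exceeds z robust std deviations."""
    trend = np.convolve(x, np.ones(window) / window, mode="same")
    detrended = x - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    resid = detrended - seasonal[np.arange(len(x)) % period]
    lo, hi = window, len(x) - window   # skip the moving-average edge effects
    med = np.median(resid[lo:hi])
    sigma = 1.4826 * np.median(np.abs(resid[lo:hi] - med))  # robust std via MAD
    idx = np.where(np.abs(resid - med) > z * sigma)[0]
    return idx[(idx >= lo) & (idx < hi)]

rng = np.random.default_rng(3)
t = np.arange(200)
series = 0.05 * t + np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.1, 200)
series[120] += 3.0  # injected outlier
print(detect_anomalies(series))  # the spike at index 120 is flagged
```

Using the median and MAD rather than the mean and standard deviation keeps the noise estimate itself from being corrupted by the very outliers being detected.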
As a side effect of this project, I also developed a small module for modelling the Variance Gamma (VarGamma) distribution. See GitHub.
As part of my diploma thesis I worked on short-term solar activity prediction from magnetogram images. The main parts of the pipeline were detecting active regions in the image, segmenting them, and computing features from the segmented areas. The project was done in collaboration with the Russian Space Research Institute and Microsoft Research Cambridge.
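A toy sketch of the first pipeline stage (the real detector was considerably more involved): threshold the absolute field strength, group pixels into 4-connected components, and report simple per-region features. The image and threshold below are synthetic.

```python
import numpy as np
from collections import deque

def active_regions(magnetogram, thresh):
    """Detect candidate active regions: threshold the absolute field
    strength, then group pixels into 4-connected components via BFS and
    report per-region features (size, peak field, centroid)."""
    mask = np.abs(magnetogram) > thresh
    labels = np.zeros(mask.shape, dtype=int)
    regions, cur = [], 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a found region
        cur += 1
        labels[start] = cur
        queue, pixels = deque([start]), []
        while queue:
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    queue.append((ny, nx))
        ys, xs = zip(*pixels)
        regions.append({"size": len(pixels),
                        "peak": float(np.abs(magnetogram[ys, xs]).max()),
                        "centroid": (float(np.mean(ys)), float(np.mean(xs)))})
    return regions

img = np.zeros((32, 32))
img[5:8, 5:8] = 1.0       # toy "active region" of positive polarity
img[20:22, 10:14] = -1.5  # and one of negative polarity
print([r["size"] for r in active_regions(img, thresh=0.5)])  # [9, 8]
```

Features of this kind (region area, peak field strength, polarity mix) are exactly what the downstream prediction model consumes.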
We developed two techniques for solving signal and image segmentation problems with incorporated global constraints, such as the distribution of signal states or the number of pixels in each segment. Subjects learned: HMMs, MRFs, the EM algorithm, dual decomposition, variational approximation and combinatorial optimization.
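The flavor of such constrained problems can be illustrated with a Lagrangian-relaxation sketch (not the thesis techniques themselves): a multiplier on the unary cost of one state is adjusted, here by bisection, until the Viterbi labeling satisfies a global count constraint. All numbers below are illustrative.

```python
import numpy as np

def viterbi(unary, switch_cost, lam):
    """Min-cost binary labeling with a Potts pairwise term; `lam` is a
    Lagrange multiplier added to the unary cost of state 1 everywhere."""
    n = len(unary)
    cost = np.zeros(2)
    back = np.zeros((n, 2), dtype=int)
    for t in range(n):
        u = np.array([unary[t, 0], unary[t, 1] + lam])
        new = np.empty(2)
        for s in range(2):
            prev = cost + switch_cost * (np.arange(2) != s)
            back[t, s] = int(np.argmin(prev))
            new[s] = u[s] + prev[back[t, s]]
        cost = new
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for t in range(n - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels

def segment_with_count(unary, switch_cost, target, iters=60):
    """Bisect the multiplier until exactly `target` positions take state 1
    (the global constraint); returns the last labeling found."""
    lo, hi = -10.0, 10.0
    labels = viterbi(unary, switch_cost, 0.0)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        labels = viterbi(unary, switch_cost, lam)
        k = int(labels.sum())
        if k == target:
            break
        # Too many state-1 positions: raise the penalty; too few: lower it.
        lo, hi = (lam, hi) if k > target else (lo, lam)
    return labels

obs = np.array([0.1, -0.2, 0.9, 1.1, 0.8, 0.2, 0.1, 0.0, 0.95, 1.05])
unary = (obs[:, None] - np.array([0.0, 1.0])) ** 2  # squared-error unaries
labels = segment_with_count(unary, switch_cost=0.1, target=5)
print(labels)  # [0 0 1 1 1 0 0 0 1 1]
```

Dual decomposition generalizes this one-multiplier idea to richer constraints, such as a full target distribution over states, at the cost of a subgradient loop instead of a scalar bisection.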