Saturday, April 19, 2008

Research Seminar Presentations

Here are the first four presentations from the Spring 08 research seminars:

Microscopy Image Analysis for Phenotyping Studies - Kishore Mosaliganti (link)
Deriving Biological Insights: Optical Sectioning Microscopy - Shantanu Singh (link)
Diffusion Tensor Imaging Research - Okan Irfanoglu (link)
fMRI Analysis: Methods and Challenges - Firdaus Janoos (link)

These presentations were meant to give an overview of research in our group and present some open problems to the audience.

Tuesday, March 4, 2008

Document Database

I've found CiteULike a pretty good way of organizing, storing, and sharing papers. I've created a group for ourselves to share resources. The 888/journal club papers are tagged raghu-888 and can be accessed with this link.

Tumor Micro-environment

A short, well-written article about tumors and their micro-environment.

Sunday, February 24, 2008

Document Database

Do you have trouble keeping track of and organizing all the research papers you download and read? Do you spend more time editing and arranging the bibliography than actually writing the paper? Do BibTeX entries drive you as nuts as they drive me?

Well, there is a good piece of software called the Document Database (http://docdb.sourceforge.net/index.html) that is designed explicitly to solve this document management nightmare.

However, this one is a bit too much for my needs, with all its client-server architecture, web interface, and what not. Some time ago, I was looking over someone's shoulder and they were using a really neat document management program. It was on a Mac, I think. If you know what it was, please do let me know!

Tuesday, February 12, 2008

Level Set Methods and Dynamic Implicit Surfaces

Level Set Methods and Dynamic Implicit Surfaces
Series: Applied Mathematical Sciences, Vol. 153
Osher, Stanley; Fedkiw, Ronald
2003, XIII, 273 p., 109 illus., 24 in color, Hardcover
ISBN: 978-0-387-95482-0

The whole book is available online here.

Monday, February 11, 2008

What does an fMRI signal measure - exactly?

Nikos K. Logothetis of the Max Planck Institute, in an article in Nature Neuroscience, explains in detail the relationship between the effect measured by fMRI (the Blood Oxygenation Level Dependent, or BOLD, signal), the cerebral metabolic rate of oxygen consumption (CMRO2), and the underlying neural activity:

http://www.nature.com/neuro/journal/v10/n10/full/nn1007-1230.html

Friday, February 8, 2008

Matlab codes for active contour without edges (levelset based)

Somebody implemented the algorithm of active contours without edges (based on Chan and Vese's paper) in Matlab at http://www.postulate.org/segmentation.php. It runs well, but is slow.

Another Matlab implementation is by Dr. Chunming Li (a collaborator of Dr. Kao) and is available at http://www.engr.uconn.edu/~cmli/code/. It is much faster and also includes an implementation for multiple level-set functions.
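For concreteness, here is a minimal sketch of the core Chan-Vese update loop in MATLAB. This is my own toy version under simplifying assumptions (fixed time step, smoothed delta function, no reinitialization), not either of the implementations linked above; it assumes the Image Processing Toolbox's coins.png demo image as test data.

% Toy Chan-Vese "active contours without edges": evolve a level set so the
% zero contour separates two regions of roughly constant intensity.
I = double(imread('coins.png'));
I = I / max(I(:));
[m, n] = size(I);
[X, Y] = meshgrid(1:n, 1:m);
phi = sqrt((X - n/2).^2 + (Y - m/2).^2) - min(m, n)/3;  % initial circle

dt = 0.5; mu = 0.2;                          % time step, curvature weight
for k = 1:200
    c1 = mean(I(phi <= 0));                  % mean intensity inside the curve
    c2 = mean(I(phi > 0));                   % mean intensity outside
    [px, py] = gradient(phi);                % curvature of the level sets:
    mag = sqrt(px.^2 + py.^2) + eps;         %   divergence of the unit normal
    [dnx_dx, dnx_dy] = gradient(px ./ mag);
    [dny_dx, dny_dy] = gradient(py ./ mag);
    kappa = dnx_dx + dny_dy;
    delta = 1 ./ (pi * (1 + phi.^2));        % smoothed Dirac: update near curve
    phi = phi + dt * delta .* (mu * kappa - (I - c1).^2 + (I - c2).^2);
end
imagesc(I); colormap gray; hold on;
contour(phi, [0 0], 'r', 'LineWidth', 2);    % final segmentation boundary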

Expectation Maximization

We discussed "Latent Variable Models and Learning with the EM Algorithm" today, mostly using slides from Sam Roweis' talk. The view of EM from the lower-bound optimization perspective [1] is particularly interesting and is perhaps the most illuminating view of EM. The discussion in [2] is also very useful for understanding extensions of EM. Of course, the canonical reference [3] is always cited and is perhaps worth a read if you have a lot of patience.

We will continue the discussion next week when we will discuss the incremental version of EM [2] and revisit Kilian Pohl's work.
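To make the E-step/M-step alternation concrete, here is a minimal MATLAB sketch for a two-component 1-D Gaussian mixture (a toy of my own, not from the slides). The lower-bound view is visible in the structure: the E-step computes responsibilities that make the bound tight, and the M-step maximizes the bound over the parameters.

% Minimal EM for a 1-D mixture of two Gaussians.
x = [randn(1,200) - 2, 0.5*randn(1,200) + 3];    % synthetic data
N = numel(x);
mu = [-1 1]; s2 = [1 1]; w = [0.5 0.5];          % initial parameter guesses

for iter = 1:100
    % E-step: responsibilities r(k,i) = p(component k | x_i)
    r = zeros(2, N);
    for k = 1:2
        r(k,:) = w(k) * exp(-(x - mu(k)).^2 / (2*s2(k))) / sqrt(2*pi*s2(k));
    end
    r = r ./ repmat(sum(r, 1), 2, 1);
    % M-step: maximize the lower bound, i.e. responsibility-weighted ML estimates
    for k = 1:2
        Nk    = sum(r(k,:));
        mu(k) = sum(r(k,:) .* x) / Nk;
        s2(k) = sum(r(k,:) .* (x - mu(k)).^2) / Nk;
        w(k)  = Nk / N;
    end
end
fprintf('means %.2f %.2f, variances %.2f %.2f\n', mu(1), mu(2), s2(1), s2(2));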

[1] Minka, T. (1998). Expectation-Maximization as lower bound maximization. Tutorial published on the web at http://www-white.media.mit.edu/tpminka/papers/em.html.
[2] Neal, R. M. and Hinton, G. E. (1999). A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, M. I. Jordan, Ed. MIT Press, Cambridge, MA, 355-368.
[3] Dempster, A., Laird, N., and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.


Sunday, February 3, 2008

Matlab implementation of level-set methods

A graduate researcher at UCSB has been kind enough to post his 2D MATLAB implementation of level-set techniques here. Looks like a good way to start tweaking and learning.

Some more biomedical image analysis software

Friday, January 25, 2008

The unemployable programmer

The blog of the IEEE Spectrum has a posting titled "Are Future US Programmers Being Taught to be Unemployable". This was a follow-up to an article run in the Journal of Defense Software Engineering on the state of computer science education. Now, even though in all times, all places, and all fields there are those who decry the state of education – how the standards of today are a shadow of those of yore, how we're cheapening, commercializing, debasing, dumbing-down (put-your-participle-of-choice-here) the education system – there is some merit to the points raised here. The authors argue that formal logic, formal systems, numerical analysis, and algorithmic analysis form the basic toolkit of a CS engineer, and that modern teaching approaches fail to adequately impart these skills. Much of their ire is directed against Java as an instructional programming language. To that, I would add MATLAB (as dearly as I love it). To quote the blog, which in turn quotes another article (talk about recursive quoting):


Dewar says in the interview: "A lot of it is, 'Let's make this [computer science and programming] all more fun.' You know, 'Math is not fun, let's reduce math requirements. Algorithms are not fun, let's get rid of them. Ewww – graphic libraries, they're fun. Let's have people mess with libraries. And [forget] all this business about "command line" – we'll have people use nice visual interfaces where they can point and click and do fancy graphic stuff and have fun.'"


While the original article is directed particularly at undergraduate schooling, this is something I would subscribe to at a larger level. I find the CS courses I take extremely light and fluffy – comparatively easy, intellectually, with little or no emphasis on theory – and mostly just "application level" programming, i.e., basically a propagation of the "go use some libraries to do some fancy stuff, the math or algorithms of which you don't really need to understand" philosophy. To get some really heavy-duty lifting these days, I have to look outside the department – the first recourse is to the ECE or IND_ENG depts – and if I want to step it up some more, then the MATH department (which gets way too hard for a soft, effete CS guy like me). For example, I took a machine learning course offered by the CSE dept, which was nothing more than a perusal of the "standard algorithms" available and some applications thereof. Now I'm auditing (that's all I dare do) a Statistical Learning course offered by the STATS dept, which is at a whole other level of math and analysis – hypothesis testing, estimation theory, linear operator theory, functional analysis, etc. And this is just an introductory/overview course – it promises to get more rigorous next quarter!


And I think this is not an attribute of CS education in the US alone, but of CS education in general. The other, more established engineering disciplines IMO require a greater amount of rigour and drilling to be good at. One reason could be that they deal with the real, physical world, and have to develop a deep appreciation of the laws of physics that govern what they do – unlike CS, a virtual world where anything goes (as long as it conforms to basic logic).


On an aside, the article states, "Seeing a complete Lisp interpreter written in Lisp is an intellectual revelation that all computer scientists should experience." This I agree with wholeheartedly. Programming the Lisp meta-circular interpreter in CSE755 was the most joy I ever had in the OSU-CSE core curriculum (CSE725, Theory of Computation, excepted).

Saturday, January 19, 2008

A Weaker Cheaper MRI

The IEEE Spectrum reports on the development of an MRI machine that operates at a meagre 46 microteslas (almost the same strength as the earth's magnetic field, and tens of thousands of times weaker than conventional MRI machines, which typically operate at ~1.5 teslas). The stated advantages of these machines are:

Because it needs fewer costly magnets, a weak-magnetic-field MRI machine might cost as little as US $100 000, compared with $1 million or more for a standard MRI system ... But perhaps the most exciting thing about low-field imagers is that they can also perform another imaging technique, magnetoencephalography (MEG) .... MEG measures the magnetic fields produced by brain activity and is used to study seizures. Putting the two imaging modes together could mean matching images of brain activity from MEG with images of brain structure from MRI, and it might make for more precise brain surgery.

Low-field MRI has other advantages, says John Clarke, a physicist at the University of California, Berkeley.... "I'm personally quite excited about the idea of imaging tumors" with low-field MRI, he says. The difference between cancerous and noncancerous tissue is subtle, particularly in breast and prostate tumors, and the high-field strengths used in conventional MRI can drown out the signal. But low-field MRI will be able to detect the differences, Clarke predicts. A low-field MRI might also allow for scans during surgical procedures such as biopsies, because the weaker magnetic field would not heat up or pull at the metal biopsy needle.

Now this seems a really exciting development in MRI technology – one that would make MRIs practical medical devices, rather than the hi-tech, hi-cost curiosities they are now. And more than just the points mentioned in this article, the reason I find this technology so alluring is the potential for developing low-cost, easily portable and deployable machines that can be used in the small clinics that dot the world, rather than today's power-hungry behemoths that cost a fortune to build and operate and are available to less than 10% of the world's population.


Sunday, January 13, 2008

The Princeton Companion to Mathematics

I've been reading sample articles of this book from here. If it achieves its purpose, it's a must-have...

Friday, January 11, 2008

MICCAI 2008

MICCAI 2008, the 11th International Conference on Medical Image Computing and Computer Assisted Intervention, will be held from September 6 to 10, 2008 in New York City, USA. MICCAI typically attracts over 600 world-leading scientists, engineers, and clinicians from a wide range of disciplines associated with medical imaging and computer assisted surgery.

Topics

Topics to be addressed at MICCAI 2008 include, but are not limited to:

  • General Medical Image Computing
  • Computer Assisted Interventional Systems and Robotics
  • Visualization and Interaction
  • General Biological and Neuroscience Image Computing
  • Computational Anatomy (statistics on anatomy)
  • Computational Physiology (virtual organs)
  • Innovative Clinical and Biological Applications

Important Dates

January 20, 2008 Tutorial and workshop proposals
February 10, 2008 Acceptance of tutorials and workshops
March 7, 2008 Submission of full papers
May 14, 2008 Acceptance of papers
June 9, 2008 Camera ready copy for papers
September 6 - 10, 2008 Tutorials, Conference, Workshops

Submission of Papers

We invite electronic submissions to MICCAI 2008 (LNCS style, double-blind review) of papers of up to 8 pages, for oral or poster presentation. Papers will be reviewed by members of the program review committee and assessed for quality and best means of presentation. Besides advances in methodology, we would also like to encourage the submission of papers that demonstrate clinical relevance, clinical applications, and validation studies.

Proposals for Tutorials and Workshops

Tutorials will be held on September 6 and/or 10, 2008 and will complement and enhance the scientific program of MICCAI 2008. The purpose of the tutorials is to provide educational material for training new professionals in the field, including students, clinicians, and new researchers.

Workshops will be held on September 6 and/or 10, 2008 and will provide an opportunity for discussing technical and application issues in depth. The purpose of the workshops is to provide a comprehensive forum on topics that will not be fully explored during the main conference.

Executive Committee

Leon Axel, New York University, USA (General Co-Chair)
Brian Davies, Imperial College, UK (General Co-Chair)
Dimitris N Metaxas, Rutgers University, USA (General Chair)

Thursday, January 10, 2008

Imaging in Systems Biology

Sean G. Megason and Scott E. Fraser

Beckman Institute and Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA

Available online 6 September 2007.

Most systems biology approaches involve determining the structure of biological circuits using genomewide “-omic” analyses. Yet imaging offers the unique advantage of watching biological circuits function over time at single-cell resolution in the intact animal. Here, we discuss the power of integrating imaging tools with more conventional -omic approaches to analyze the biological circuits of microorganisms, plants, and animals.

(link)

Tuesday, December 4, 2007

Sparse Decomposition and Modeling of Anatomical Shape Variation

Sent to you by Shantanu via Google Reader:

Recent advances in statistics have spawned powerful methods for regression and data decomposition that promote sparsity, a property that facilitates interpretation of the results. Sparse models use a small subset of the available variables and may perform as well or better than their full counterparts if constructed carefully. In most medical applications, models are required to have both good statistical performance and a relevant clinical interpretation to be of value. Morphometry of the corpus callosum is one illustrative example. This paper presents a method for relating spatial features to clinical outcome data. A set of parsimonious variables is extracted using sparse principal component analysis, producing simple yet characteristic features. The relation of these variables with clinical data is then established using a regression model. The result may be visualized as patterns of anatomical variation related to clinical outcome. In the present application, landmark-based shape data of the corpus callosum is analyzed in relation to age, gender, and clinical tests of walking speed and verbal fluency. To put the data-driven sparse principal component method into perspective, we consider two alternative techniques, one where features are derived using a model-based wavelet approach, and one where the original variables are regressed directly on the outcome.
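As a rough illustration of the pipeline the abstract describes (sparse decomposition first, regression on the resulting features second), here is a toy MATLAB sketch. It extracts one sparse component via a soft-thresholded power iteration, which is a crude simplification of sparse PCA, and it uses synthetic stand-in data; it is not the authors' algorithm or data.

% Toy "sparse decomposition, then regress" pipeline.
n = 100; p = 50;
X = randn(n, p);                           % stand-in for centered shape variables
y = X(:, 1:5) * ones(5, 1) + randn(n, 1);  % outcome driven by a few variables

lambda = 0.3;                              % sparsity level (soft threshold)
v = randn(p, 1); v = v / norm(v);
for it = 1:200
    u = X * v;  u = u / norm(u);
    v = X' * u;
    v = sign(v) .* max(abs(v) - lambda, 0);  % soft-threshold -> sparse loadings
    if norm(v) == 0, break; end              % threshold was too aggressive
    v = v / norm(v);
end
score = X * v;                             % parsimonious feature
b = [ones(n, 1), score] \ y;               % relate the feature to the outcome
fprintf('%d nonzero loadings, regression slope %.2f\n', nnz(v), b(2));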


Possible topics for Winter 888

Gaussian Processes for Machine Learning

"Roughly speaking a stochastic process is a generalization of a probability distribution (which describes a finite-dimensional random variable) to functions. By focussing on processes which are Gaussian, it turns out that the computations required for inference and learning become relatively easy. Thus, the supervised learning problems in machine learning which can be thought of as learning a function from examples can be cast directly into the Gaussian
process framework."

The book is online.
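For a taste of how simple the basic computation is, here is a minimal GP regression sketch in MATLAB with a squared-exponential kernel, following the pattern of Algorithm 2.1 in the book. This is my own toy code, not the book's, and the hyperparameters are simply assumed rather than learned.

% Minimal Gaussian process regression with a squared-exponential kernel.
x  = (0:0.5:10)';                         % training inputs
y  = sin(x) + 0.1 * randn(size(x));       % noisy training targets
xs = (0:0.05:10)';                        % test inputs
ell = 1.0; sf2 = 1.0; sn2 = 0.01;         % length-scale, signal var., noise var.

sqdist = @(a, b) (repmat(a, 1, numel(b)) - repmat(b', numel(a), 1)).^2;
k = @(a, b) sf2 * exp(-sqdist(a, b) / (2 * ell^2));

K  = k(x, x) + sn2 * eye(numel(x));       % covariance of training targets
L  = chol(K)';                            % lower Cholesky factor, stable solves
alpha = L' \ (L \ y);
Ks = k(xs, x);                            % test/train cross-covariance
fmu = Ks * alpha;                         % predictive mean
v   = L \ Ks';
fs2 = sf2 - sum(v.^2, 1)';                % predictive variance (noise-free)

plot(x, y, 'k+', xs, fmu, 'b-', ...
     xs, fmu + 2*sqrt(fs2), 'r:', xs, fmu - 2*sqrt(fs2), 'r:');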

Graphical Models

"Graphical models are a marriage between probability theory and graph theory. They provide a natural tool for dealing with two problems that occur throughout applied mathematics and engineering -- uncertainty and complexity -- and in particular they are playing an increasingly important role in the design and analysis of machine learning algorithms. Fundamental to the idea of a graphical model is the notion of modularity -- a complex system is built by combining simpler parts. Probability theory provides the glue whereby the parts are combined, ensuring that the system as a whole is consistent, and providing ways to interface models to data. The graph theoretic side of graphical models provides both an intuitively appealing interface by which humans can model highly-interacting sets of variables as well as a data structure that lends itself naturally to the design of efficient general-purpose algorithms.

Many of the classical multivariate probabilistic systems studied in fields such as statistics, systems engineering, information theory, pattern recognition and statistical mechanics are special cases of the general graphical model formalism -- examples include mixture models, factor analysis, hidden Markov models, Kalman filters and Ising models. The graphical model framework provides a way to view all of these systems as instances of a common underlying formalism. This view has many advantages -- in particular, specialized techniques that have been developed in one field can be transferred between research communities and exploited more widely. Moreover, the graphical model formalism provides a natural framework for the design of new systems." --- Michael Jordan, 1998.

Wednesday, November 28, 2007

Point Matching

Shape Contexts [1] by Belongie, Malik, and Puzicha at Berkeley looks like a promising approach for finding point correspondences (along the lines of ICP, TPS-RPM, etc.).


Give it a looksie whenever you get the time.


[1] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/shape/sc_digits.html
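As a rough sketch of what the descriptor computes, here is a toy MATLAB version of my own (not the Berkeley code, and to be saved as shape_contexts.m): for every point, build a log-polar histogram of where all the other points lie relative to it. Corresponding points on two shapes should then have similar histograms, which can be compared with, e.g., a chi-squared cost to set up the matching problem.

% Toy shape context: for each point, a log-polar histogram of the
% relative positions of the remaining points.
function H = shape_contexts(P, nr, nt)
% P: N-by-2 point coordinates; nr, nt: number of radial / angular bins
N  = size(P, 1);
dx = repmat(P(:,1)', N, 1) - repmat(P(:,1), 1, N);   % dx(i,j) = x_j - x_i
dy = repmat(P(:,2)', N, 1) - repmat(P(:,2), 1, N);
r  = sqrt(dx.^2 + dy.^2);
r  = r / (mean(r(:)) + eps);              % normalize distances for scale invariance
th = atan2(dy, dx);                       % angles in (-pi, pi]

redges = logspace(log10(0.125), log10(2), nr + 1);   % log-spaced radial bin edges
H = zeros(N, nr * nt);
for i = 1:N
    for j = 1:N
        if i == j, continue; end
        rb = find(r(i,j) <= redges, 1) - 1;          % radial bin index
        if isempty(rb) || rb < 1, continue; end      % point falls outside the bins
        tb = min(floor((th(i,j) + pi) / (2*pi) * nt) + 1, nt);  % angular bin
        H(i, (rb - 1)*nt + tb) = H(i, (rb - 1)*nt + tb) + 1;
    end
end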

Monday, November 19, 2007

IEEE International Symposium on Biomedical Imaging - ISBI 2008 Call for Papers


CALL FOR PAPERS
2008 IEEE International Symposium
on Biomedical Imaging: From Nano to Macro

May 14-17, 2008
Paris Marriott Rive Gauche Hotel & Conference Center, Paris, France

** Paper Submission Deadline: December 7, 2007 **

The Fifth IEEE International Symposium on Biomedical Imaging (ISBI'08) will be held May 14-17, 2008, in Paris, France. The previous meetings have played a leading role in facilitating interaction between researchers in medical and biological imaging. The 2008 meeting will continue the tradition of fostering cross-fertilization between different imaging communities and contributing to an integrative imaging approach across all scales of observation.

ISBI 2008 is a joint initiative of the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS), with the support of Optics Valley. The meeting will feature an opening afternoon of tutorials and short courses, followed by a strong scientific program of plenary talks and special sessions as well as oral and poster presentations of peer-reviewed contributed papers. An industrial exhibition is planned.

High-quality papers are solicited containing original contributions to the algorithmic, mathematical and computational aspects of biomedical imaging, from nano- to macroscale. Topics of interest include image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological and statistical modeling. Papers on all molecular, cellular, anatomical and functional imaging modalities and applications are welcomed. All accepted papers will be published in the proceedings of the symposium and will afterwards also be made available online through the IEEE Xplore database.

Important Dates:
Deadline for submission of 4-page paper:
7 December 2007 (Midnight at International Date Line)

Notification of acceptance/rejection:
15 February 2008

Submission of final accepted 4-page paper:
14 March 2008

Deadline for early registration:
14 March 2008

Organizing Committee

General Chair
Jean-Christophe Olivo-Marin, Institut Pasteur, Paris, France

Program Chairs
Isabelle Bloch, ENST, Paris, France
Andrew Laine, Columbia University, NYC, USA

Special Sessions
Josiane Zerubia, INRIA, Sophia-Antipolis, France
Wiro Niessen, Erasmus Medical Ctr, Rotterdam, The Netherlands

Plenaries
Christian Roux, ENST Bretagne, Brest, France

Tutorials
Michael Unser, EPFL, Lausanne, Switzerland

Finances
Elsa Angelini, ENST, Paris, France

Publications
Habib Benali, Inserm, Paris, France

Local Arrangements
Severine Dubuisson, Univ. Pierre et Marie Curie, Paris, France
Vannary Meas-Yedid, Institut Pasteur, Paris, France

Industrial Liaison
Spencer Shorte, Institut Pasteur, Paris, France
Nicholas Ayache, INRIA, Sophia-Antipolis, France

Institutional Liaison
Claude Boccara, ESPCI, Paris, France

Technical Liaison
Sébastien Ourselin, CSIRO, Brisbane, Australia

American Liaison
Jeff Fessler, University of Michigan, Ann Arbor, USA

Monday, November 12, 2007

Diffeomorphic deformation fields

Gary E. Christensen, Sarang C. Joshi, and Michael I. Miller, "Volumetric Transformation of Brain Anatomy," IEEE Trans. Med. Imag., 1997.


http://citeseer.ist.psu.edu/cache/papers/cs/25121/http:zSzzSzwww.icaen.uiowa.eduzSz~geczSzpaperszSzchristensen_tmi97.pdf/christensen97volumetric.pdf


Abstract

This paper presents diffeomorphic transformations of three-dimensional (3-D) anatomical image data of the macaque occipital lobe and whole brain cryosection imagery and of deep brain structures in human brains as imaged via magnetic resonance imagery. These transformations are generated in a hierarchical manner, accommodating both global and local anatomical detail. The initial low-dimensional registration is accomplished by constraining the transformation to be in a low-dimensional basis. The basis is defined by the Green’s function of the elasticity operator placed at predefined locations in the anatomy and the eigenfunctions of the elasticity operator. The high-dimensional large deformations are vector fields generated via the mismatch between the template and target-image volumes constrained to be the solution of a Navier–Stokes fluid model. As part of this procedure, the Jacobian of the transformation is tracked, insuring the generation of diffeomorphisms. It is shown that transformations constrained by quadratic regularization methods such as the Laplacian, biharmonic, and linear elasticity models, do not ensure that the transformation maintains topology and, therefore, must only be used for coarse global registration.
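The Jacobian-tracking idea at the end is easy to appreciate numerically: a deformation x -> x + u(x) remains locally invertible (no folding) only while det(I + Du) stays positive everywhere. Here is a quick 2-D MATLAB check of my own, on a made-up displacement field, unrelated to the paper's code:

% Check local invertibility of a 2-D deformation x -> x + u(x):
% the Jacobian determinant det(I + Du) must stay positive everywhere.
[X, Y] = meshgrid(linspace(-1, 1, 64));
ux = 0.1 * sin(pi*X) .* cos(pi*Y);       % a smooth displacement field
uy = 0.1 * cos(pi*X) .* sin(pi*Y);
h = X(1,2) - X(1,1);                     % grid spacing
[uxx, uxy] = gradient(ux, h);            % uxx = d(ux)/dx, uxy = d(ux)/dy
[uyx, uyy] = gradient(uy, h);
detJ = (1 + uxx) .* (1 + uyy) - uxy .* uyx;
if all(detJ(:) > 0)
    disp('deformation is locally invertible everywhere');
else
    fprintf('folding at %d grid points (min detJ = %.3f)\n', ...
            nnz(detJ <= 0), min(detJ(:)));
end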

Wednesday, November 7, 2007

A history of quaternions

A very delightful account of the story, the logic and the personalities behind
Hamilton's development of quaternions and versors at
http://www.jstor.org/view/00255572/ap060385/06a00280/0

Tutorial on computational methods

A good resource for the computational aspects of stochastic theory, ODEs, PDEs, and statistical mechanics (from a computational physics point of view) at:

http://homepage.univie.ac.at/franz.vesely/cp_tut/nol2h/new/index.html

by Franz J. Vesely at the University of Vienna.

Wednesday, October 31, 2007

IEEE Visualization 2007


We're at Vis 2007. Here's the program.

Some interesting papers:

Visualizing Whole-Brain DTI Tractography with GPU-based Tuboids and LoD Management. Vid Petrovic, James Fallon, Falko Kuester.

Monday, October 29, 2007

Stellar presentation by Mosaliganti

I presented the use of N-point correlation functions in a geometry-driven visualization process at KAV 08. The presentation was well received, and a couple of the panel members walked up to me afterwards to express their appreciation. During the panel meeting, one of the committee members singled out my paper for a special mention.

Here's a link to the presentation.
