
Thursday, January 19, 2012

Pit-bull reviewing, the pursuit of perfection and the victims of success

(Yet) Another editorial decrying the excesses (or insufficiencies) of the peer review system:

http://www.biomedcentral.com/1741-7007/9/84

The main thesis of this article is that:

"the success of a postdoctoral Fellow in finding a good academic position is perceived to depend, and to a large extent probably does depend, on his or her having published a paper in one of the three highest-profile general biology journals; but getting a paper into one of those journals can be extraordinarily difficult because - it is widely felt - referees seem to see it as their responsibility to insist on time-consuming additions and revisions, and editors are unable or unwilling to judge for themselves the justice of the referees' advice."

Saturday, September 11, 2010

LaTeX fonts for MS Office

If you have struggled with making your MS PowerPoint or Word document
look visually compatible with the fonts used for math in LaTeX, you
might want to get the AMS set of LaTeX fonts for MS Office:

http://www.ams.org/index/tex/amsfonts.html

Wednesday, April 21, 2010

Peer review? Really?

Found in a "peer-reviewed" journal Advances in Neural Computing – italics mine.

 

"Functional magnetic resonance imaging (fMRI) is a new non-hurt measure technique for brain activity which have been used at the study of brain cognition, locating nerve activity, medicine, psychology and other domains, and has become one of the most important way of the study of brain function"

 

"In general, the general linear model take Gamma function as hemodynamic response function to get design matrix. But, it is not reasonable in most actually case. The BOLD signal of the cerebral activation is a collective response of an activated region and it can be explained as a mutual interaction process between the neural response to a stimulus and the hemodynamic change due to the activation of a neural cluster."

 

 

It goes on in this spirit… but need I bother?

Monday, April 19, 2010

Self-Tuning Spectral Clustering

"Self-Tuning Spectral Clustering" is a paper in NIPS 2004 by L. Zelnik-Manor and P. Perona that realizes the very intuitive idea of multi-scale (scale-space) clustering:

Abstract
Spectral clustering has been theoretically analyzed and empirically proven useful. There are still open issues:
(i) Selecting the appropriate scale of analysis,
(ii) Handling multi-scale data,
(iii) Clustering with irregular background clutter, and,
(iv) Finding automatically the number of groups.
We explore and address all the above issues. We first propose that a `local' scale should be used to compute the affinity between each pair of points. This local scaling leads to better clustering especially when the data includes multiple scales and when the clusters are placed within a cluttered background. We further suggest exploiting the structure of the eigenvectors to infer automatically the number of groups. This leads to a new algorithm in which the final randomly initialized k-means stage is eliminated.
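The local-scaling idea at the heart of the paper is compact enough to sketch. A minimal NumPy version is below (the function name is mine; k=7 is the default the authors report): sigma_i is the distance from point i to its k-th nearest neighbour, and the affinity is A_ij = exp(-d(i,j)^2 / (sigma_i * sigma_j)).

```python
import numpy as np

def local_scaling_affinity(X, k=7):
    """Affinity matrix with per-point local scaling (Zelnik-Manor & Perona).

    sigma_i = distance from point i to its k-th nearest neighbour;
    A_ij = exp(-d(i, j)^2 / (sigma_i * sigma_j)).
    """
    # full pairwise Euclidean distance matrix
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # k-th nearest-neighbour distance per point (column 0 is the self-distance)
    sigma = np.sort(D, axis=1)[:, k]
    A = np.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(A, 0.0)
    return A
```

Feeding this A into a standard spectral clustering pipeline (normalized Laplacian, top eigenvectors) gives the scale-adaptive behaviour the abstract describes.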


Thursday, November 12, 2009

database of data

http://www.biomedcentral.com/1756-0500/2/113

Vincent S Smith
Natural History Museum,
Cromwell Road, London, SW7 5BD, UK


Abstract

The fabric of science is changing, driven by a revolution in digital
technologies that facilitate the acquisition and communication of
massive amounts of data. This is changing the nature of collaboration
and expanding opportunities to participate in science. If digital
technologies are the engine of this revolution, digital data are its
fuel. But for many scientific disciplines, this fuel is in short
supply. The publication of primary data is not a universal or
mandatory part of science, and despite policies and proclamations to
the contrary, calls to make data publicly available have largely gone
unheeded. In this short essay I consider why, and explore some of the
challenges that lie ahead, as we work toward a database of everything.

Tuesday, September 15, 2009

Funding in scientific research

http://www.plosbiology.org/annotation/listThread.action?inReplyTo=info%3Adoi%2F10.1371%2Fannotation%2Fdf2fbc5d-1c2c-4035-ae09-d7f800404607&root=info%3Adoi%2F10.1371%2Fannotation%2Fdf2fbc5d-1c2c-4035-ae09-d7f800404607

Thursday, June 25, 2009

Mendeley

If you're having trouble managing your bibliographies and paper / literature collections, you should try out Mendeley, probably one of the most integrated and comprehensive digital library management systems out there. Not only does it support importing citations from PubMed, ACM, etc. a la JabRef, but it also synchronizes with CiteULike. Moreover, it allows synchronization across machines, sharing of libraries with other users, anywhere-access, literature trend tracking, and other such goodies. And a pretty slick interface as icing on the cake. It appears to me more robust and comprehensive than JabRef (which is quite excellent but unfortunately restricted to your local machine).


Wednesday, April 15, 2009

Advanced MATLAB toolbox with enhanced functionality for image processing
http://www.diplib.org/home222

Wednesday, March 18, 2009

Journal Club: Models in Immunology

http://mbi.osu.edu/seminars/mijournal.html#das

Journal Club: Models in Immunology

Organizers: Baltazar Aguda, Jayajit Das, and Avner Friedman

February 23, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
March 23, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
A basic introduction to signaling in our immune system

According to Dr. Das: "I will keep the level of the talk at an introductory level, where I will introduce the key players of our immune system and describe how they communicate and work together, ending with the key questions that remain to be understood. In the second talk I will be more specific and show how mathematical modeling can help in understanding those questions in the context of T cell and NK cell signaling."

The tentative schedule of the succeeding talks is given below:

March 30, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
April 6, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
Baltazar Aguda, MBI
Cancer Immunoediting

April 13, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
April 20, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
Judy Day, MBI

May 4, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
May 11, 2009; 3:00-4:00pm; MBI Auditorium (Room 355, Jennings Hall)
C. Jayaprakash

Wednesday, August 13, 2008

Gray Matter Density of Mathematicians

Below is the abstract from the article “Increased Gray Matter Density in the Parietal Cortex of Mathematicians: A Voxel-Based Morphometry Study” by K. Aydin, A. Ucar, K.K. Oguz, O.O. Okur, A. Agayev, Z. Unal, S. Yilmaz, and C. Ozturk, published in the American Journal of NeuroRadiology (AJNR), Nov-Dec 2007:

 

 

BACKGROUND AND PURPOSE:

The training to acquire or practicing to perform a skill, which may lead to structural changes in the brain, is called experience-dependent structural plasticity. The main purpose of this cross-sectional study was to investigate the presence of experience-dependent structural plasticity in mathematicians’ brains, which may develop after long-term practice of mathematic thinking.

 

MATERIALS AND METHODS:

Twenty-six volunteer mathematicians, who have been working as academicians, were enrolled in the study. We applied an optimized method of voxel-based morphometry in the mathematicians and the age- and sex-matched control subjects. We assessed the gray and white matter density differences in mathematicians and the control subjects. Moreover, the correlation between the cortical density and the time spent as an academician was investigated.

 

RESULTS:

 

We found that cortical gray matter density in the left inferior frontal and bilateral inferior parietal lobules of the mathematicians were significantly increased compared with the control subjects. Furthermore, increase in gray matter density in the right inferior parietal lobule of the mathematicians was strongly correlated with the time spent as an academician (r = 0.84; P < .01). Left-inferior frontal and bilateral parietal regions are involved in arithmetic processing. Inferior parietal regions are also involved in high-level mathematic thinking, which requires visuospatial imagery, such as mental creation and manipulation of 3D objects.

 

CONCLUSION:

 

The voxel-based morphometric analysis of mathematicians’ brains revealed increased gray matter density in the cortical regions related to mathematic thinking. The correlation between cortical density increase and the time spent as an academician suggests experience-dependent structural plasticity in mathematicians’ brains.

Friday, August 1, 2008

Introduction to fMRI Analysis

A very good set of slides introducing fMRI analysis methods, and
statistical inference - from Prof. Vince Calhoun's course at:

http://www.ece.unm.edu/~vcalhoun/courses/fMRI_Spring07/fmricourse.htm

Also, with more emphasis on the physics and neuroscientific aspects,
HST.583 at MIT:

http://ocw.mit.edu/OcwWeb/Health-Sciences-and-Technology/HST-583Fall-2004/CourseHome/index.htm

Re: Mathematics in Brain Imaging

This is good... I think the workshop at MBI in June also has some real good ones on similar topics...

raghu

On Fri, Aug 1, 2008 at 3:59 PM, firdaus.janoos <firdaus.janoos@gmail.com> wrote:
The presentations (audio + video + slides) for the summer school on Mathematics in Brain Imaging hosted by UCLA's Institute for Pure and Applied Mathematics: https://www.ipam.ucla.edu/schedule.aspx?pc=mbi2008

They will be adding more slides as the speakers send them.

There were some very good lectures/talks - esp. in the computational anatomy week (14-19 July). I've been told that the talks by Michael Miller, Sarang Joshi, Xavier Pennec and Baba Vemuri were particularly good.

Also, in the functional imaging week (21-25 July), some excellent talks, esp. on ICA, validation, MCP correction and machine learning methods.





--
Raghu Machiraju, Associate Professor,
The Ohio State University, Columbus, OH 43210

Mathematics in Brain Imaging

The presentations (audio + video + slides) for the summer school on Mathematics in Brain Imaging hosted by UCLA's Institute for Pure and Applied Mathematics: https://www.ipam.ucla.edu/schedule.aspx?pc=mbi2008

They will be adding more slides as the speakers send them.

There were some very good lectures/talks - esp. in the computational anatomy week (14-19 July). I've been told that the talks by Michael Miller, Sarang Joshi, Xavier Pennec and Baba Vemuri were particularly good.

Also, in the functional imaging week (21-25 July), some excellent talks, esp. on ICA, validation, MCP correction and machine learning methods.


Sunday, July 27, 2008

Efron on Fisher

A very interesting and thought-provoking article by Bradley Efron on the place of R.A. Fisher’s philosophies in the statistics of today, characterized by computational brute force:

 

http://www.jstor.org/pss/2676745

 

 

Friday, July 18, 2008

Regularization in Vision

In the 1985 Nature review article “Computational vision and regularization theory” (http://www.nature.com/nature/journal/v317/n6035/pdf/317314a0.pdf) by Tomaso Poggio, Vincent Torre and Christof Koch, which outlines the role of regularization methods in finding plausible solutions to computer vision problems, the authors have this very interesting speculation about biological vision:

 

“One of the mysteries of biological vision is its speed. Parallel processing has often been advocated as the answer to this problem. The model of computation provided by digital processes is, however, unsatisfactory, especially given the increasing evidence that neurones (sic) are complex devices, very different from simple digital switches. It is, therefore, interesting to consider whether the regularization approach to early vision may lead to a different type of parallel computation. We have recently suggested that linear, analog networks (either electrical or chemical) are, in fact, a natural way of solving the variational principles dictated by standard regularization theory.”

 

Not only do the authors provide a plausible reason for the computational speed for biological vision, but they also provide a compelling argument against the AI philosophy of treating intelligence and cognitive processes as separate from their biological/physical substrates.
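To make "the variational principles dictated by standard regularization theory" concrete, here is a one-dimensional Tikhonov example (a sketch of the general idea, not code from the article; the function name and the value of lambda are my own choices):

```python
import numpy as np

def tikhonov_denoise(y, lam=5.0):
    """Standard (Tikhonov) regularization on a 1-D signal.

    Minimizes  ||x - y||^2 + lam * ||D x||^2,  where D is the
    first-difference operator; the closed-form minimizer is
    x = (I + lam * D^T D)^{-1} y.
    """
    n = len(y)
    # first-difference operator: (D x)_i = x[i+1] - x[i]
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

The regularizer trades data fidelity for smoothness, which is exactly the kind of variational principle the authors argue analog networks can solve naturally.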

 

 

 

Monday, July 14, 2008

Dirty Pictures

A very fundamental work on understanding and analyzing images under a strict
statistical framework, specifically an interpretation of standard image
processing problems under Markov field theory and Bayesian solutions:

On the Statistical Analysis of Dirty Pictures
by Julian Besag
Journal of the Royal Statistical Society. Series B (Methodological), Vol. 48,
No. 3 (1986), pp. 259-302


http://www.jstor.org/stable/pdfplus/2345426.pdf
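The workhorse algorithm of Besag's paper is iterated conditional modes (ICM), which greedily sets each pixel to the label maximizing its conditional probability given the data and its neighbours. A toy binary-image version is sketched below (the energy weights are illustrative, not Besag's exact formulation):

```python
import numpy as np

def icm_denoise(y, beta=2.0, n_iter=5):
    """Iterated Conditional Modes for restoring a binary (+1/-1) image y
    under an Ising-style MRF prior, in the spirit of Besag (1986).

    Each pixel is assigned the label that maximizes the sum of a data
    term (agreement with y) and a smoothness term (agreement with the
    4-neighbourhood), sweeping the image a few times.
    """
    x = y.copy()
    h, w = y.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                nb = 0.0
                if i > 0: nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0: nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                # data term + beta-weighted neighbourhood vote
                x[i, j] = 1 if (y[i, j] + beta * nb) >= 0 else -1
    return x
```

ICM converges to a local maximum of the posterior very quickly, which is why Besag advocated it over costlier stochastic relaxation.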

ICML Discussion Page

I really like this idea. Each page has details of the paper and a discussion thread.

http://www.conflate.net/icml/

It’s a good way to get feedback and also increases accountability on the part of the authors.

Shantanu

Thursday, July 3, 2008

(Super) Fast functional imaging

An interesting paper in this month’s NeuroImage on a technique called Inverse Imaging for BOLD fMRI that reports 100ms volume acquisition times from multiple-coil arrays.

 

Lin, Fa-Hsuan; Witzel, Thomas; Mandeville, Joseph B; Polimeni, Jonathan R.; Zeffiro, Thomas A.; Greve, Douglas N.; Wiggins, Graham; Wald, Lawrence L.; Belliveau, John W. “Event-related single-shot volumetric functional magnetic resonance inverse imaging of visual processing” Neuroimage, Volume 42, issue 1 (August 1, 2008), p. 230-247

 

Abstract:

 

Developments in multi-channel radio-frequency (RF) coil array technology have enabled functional magnetic resonance imaging (fMRI) with higher degrees of spatial and temporal resolution. While modest improvement in temporal acceleration has been achieved by increasing the number of RF coils, the maximum attainable acceleration in parallel MRI acquisition is intrinsically limited only by the amount of independent spatial information in the combined array channels. Since the geometric configuration of a large-n MRI head coil array is similar to that used in EEG electrode or MEG SQUID sensor arrays, the source localization algorithms used in MEG or EEG source imaging can be extended to also process MRI coil array data, resulting in greatly improved temporal resolution by minimizing k-space traversal during signal acquisition. Using a novel approach, we acquire multi-channel MRI head coil array data and then apply inverse reconstruction methods to obtain volumetric fMRI estimates of blood oxygenation level dependent (BOLD) contrast at unprecedented whole-brain acquisition rates of 100 ms. We call this combination of techniques magnetic resonance Inverse Imaging (InI), a method that provides estimates of dynamic spatially-resolved signal change that can be used to construct statistical maps of task-related brain activity. We demonstrate the sensitivity and inter-subject reliability of volumetric InI using an event-related design to probe the hemodynamic signal modulations in primary visual cortex. Robust results from both single subject and group analyses demonstrate the sensitivity and feasibility of using volumetric InI in high temporal resolution investigations of human brain function.

 

 

 

 

Tuesday, May 13, 2008

FW: ACM Computing Reviews - New Hot Topic - Nonequispaced Fast Fourier Transform

---------- Forwarded message ----------
From: Annette Cords <annette@reviews.com>
Date: Tue, May 13, 2008 at 5:05 PM
Subject: ACM Computing Reviews - New Hot Topic - Nonequispaced Fast Fourier Transform
To: raghu@cse.ohio-state.edu


The Fast Fourier transform (FFT) is one of the most influential algorithms in use today. ACM Computing Reviews is pleased to announce the release of its new Hot Topic, "The Nonequispaced FFT: An Indispensable Algorithm for Applied Science." To read our latest Hot Topic, click here:
http://www.reviews.com/hottopic/hottopic_essay_08.cfm

Written by Daniel Potts of Chemnitz University of Technology, this Hot Topic looks at the application and potential of fast Fourier transforms. The algorithm is used in MP3 data generation, for digital television and radio encoding, and much more. When the fast Fourier transform is nonequispaced, it trades exactness for specificity. The nonequispaced fast Fourier transform has become the basis for new algorithms and promises to bring automatic optimization to specific hardware, such as Blue Gene, in the future.

Hot Topics include links to related web pages, articles, and books, and are updated on a regular basis. Computing Reviews is a collaboration between Reviews.com and the Association for Computing Machinery (ACM), and can be read daily at www.reviews.com.
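For context, the transform in question is just the discrete Fourier sum evaluated at arbitrary (nonequispaced) sample points; the O(NM) direct evaluation below is what the NFFT approximates in roughly O(N log N + M) time (the function name is mine):

```python
import numpy as np

def ndft(x, c):
    """Direct nonequispaced discrete Fourier transform.

    f(x_j) = sum_k c_k * exp(-2*pi*i * k * x_j) for arbitrary sample
    points x_j in [-1/2, 1/2) and frequencies k = -N/2, ..., N/2 - 1.
    This brute-force O(N*M) sum is the quantity the NFFT approximates.
    """
    N = len(c)
    k = np.arange(-N // 2, N // 2)
    return np.exp(-2j * np.pi * np.outer(x, k)) @ c
```

On an equispaced grid this reduces to the ordinary DFT, which is why the FFT no longer applies directly once the x_j are irregular.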

Saturday, April 19, 2008

Research Seminar Presentations

Here are the first four presentations from the Spring 08 research seminars:

Microscopy Image Analysis for Phenotyping Studies - Kishore Mosaliganti (link)
Deriving Biological Insights: Optical Sectioning Microscopy - Shantanu Singh (link)
Diffusion Tensor Imaging Research - Okan Irfanoglu (link)
fMRI Analysis: Methods and Challenges - Firdaus Janoos (link)

These presentations were meant to give an overview of research in our group and present some open problems to the audience.

Tuesday, March 4, 2008

Document Database

I've found CiteULike a pretty good way of organizing, storing, and sharing papers. I've created a group for ourselves to share resources. The 888/journal club papers are tagged raghu-888 and can be accessed with this link.

Tumor Micro-environment

A short, well-written article about tumors and their micro-environment.

Sunday, February 24, 2008

Document Database

Do you have trouble keeping track of and organizing all the research papers
you download and read? Spend more time editing and arranging the
bibliography than actually writing that paper? BibTeX entries drive you as
nuts as they do me?

Well, there is a good piece of software called the Document Database
(http://docdb.sourceforge.net/index.html) that is designed explicitly to
solve this document management nightmare.

However, this one is a bit too much for my needs - all the client-server
architecture and web-interface and what not. Sometime ago, I was looking
over someone's shoulder and they were using a really neat document
management software. It was on a Mac I think. If you know what it was -
please do let me know!

Tuesday, February 12, 2008

Level Set Methods and Dynamic Implicit Surfaces

Level Set Methods and Dynamic Implicit Surfaces
Series: Applied Mathematical Sciences , Vol. 153
Osher, Stanley, Fedkiw, Ronald
2003, XIII, 273 p. 109 illus., 24 in color., Hardcover
ISBN: 978-0-387-95482-0

The whole book is available online here.

Monday, February 11, 2008

What does an fMRI signal measure - exactly?

Nikos K. Logothetis of the Max Planck Institute, in an article in Nature Neuroscience, explains in greater detail the relationship between the effect measured by fMRI (the blood oxygenation level dependent, or BOLD, signal), the cerebral metabolic rate of oxygen consumption (CMRO2), and the underlying neural activity:

http://www.nature.com/neuro/journal/v10/n10/full/nn1007-1230.html

Friday, February 8, 2008

Matlab codes for active contour without edges (levelset based)

Somebody implemented the algorithm of active contours without edges (based on Chan and Vese's paper) in Matlab at http://www.postulate.org/segmentation.php. It runs well, but is slow.

Another Matlab implementation is by Dr. Chunming Li (who is a collaborator of Dr. Kao) and is available at http://www.engr.uconn.edu/~cmli/code/. It is much faster and also has the implementation for multiple levelset functions.
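The core Chan-Vese update is small enough to sketch. In this simplified step (parameter values are mine, and the true curvature term is approximated by a Laplacian for brevity), the level set is driven by competition between the mean intensities inside and outside the contour:

```python
import numpy as np

def chan_vese_step(phi, img, dt=0.5, mu=0.2):
    """One simplified Chan-Vese update on a level-set function phi.

    c1, c2 are the mean intensities inside/outside the zero level set;
    phi evolves under the region-competition force. Real implementations
    use the full curvature of phi instead of this Laplacian shortcut.
    """
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0
    c2 = img[~inside].mean() if (~inside).any() else 0.0
    # region competition: push phi up where img matches c1, down for c2
    force = -(img - c1) ** 2 + (img - c2) ** 2
    # 5-point Laplacian as a crude smoothness (curvature) term
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    return phi + dt * (force + mu * lap)
```

Iterating this step from a rough initialization segments piecewise-constant images without any edge (gradient) information, which is the paper's point.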

Expectation Maximization

We discussed "Latent Variable Models and Learning with the EM Algorithm" today, mostly using slides from Sam Roweis' talk. The view of EM from the lower-bound optimization perspective [1] is particularly interesting and is perhaps the most elucidative view of EM. The discussion in [2] is also very useful for understanding extensions of EM. Of course, the canonical reference [3] is always cited and perhaps worth a read if you have a lot of patience.

We will continue the discussion next week when we will discuss the incremental version of EM [2] and revisit Kilian Pohl's work.

[1] Minka, T. (1998). Expectation-Maximization as lower bound maximization. Tutorial published on the web at http://www-white.media.mit.edu/tpminka/papers/em.html
[2] Neal, R. M. and Hinton, G. E. (1999). A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, M. I. Jordan, Ed. MIT Press, Cambridge, MA, 355-368.
[3] Dempster, A., Laird, N., and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.
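As a concrete companion to the discussion, here is EM for a two-component 1-D Gaussian mixture, showing the E-step (posterior responsibilities, which tighten the lower bound discussed in [1]) and the M-step (maximizing it); the initialization scheme is my own crude choice:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture.

    E-step: compute responsibilities r[i, k] = p(z = k | x_i).
    M-step: re-estimate means, standard deviations, and mixing weights
    from the responsibility-weighted data.
    """
    mu = np.array([x.min(), x.max()])      # crude spread-apart init
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step
        pdf = (pi / (sigma * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi
```

Each iteration provably does not decrease the data log-likelihood, which is exactly the lower-bound argument of [1].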


Sunday, February 3, 2008

Matlab implementation of level-set methods

A graduate researcher at UCSB has been kind enough to post his MATLAB implementation of 2D level-set techniques here.
Looks like a good way to start tweaking and learning.

Some more biomedical image analysis software

Friday, January 25, 2008

The unemployable programmer

The blog of the IEEE Spectrum has a posting titled “Are Future US Programmers Being Taught to be Unemployable”. This was a follow-up on an article run in the Journal of Defense Software Engineering on the state of computer science engineering. Now even though, in all times, all places, all fields, there are those who decry the state of education – how the standards of today are a shadow of those of yore, how we’re cheapening, commercializing, debasing, dumbing-down, (put-your-adverbial-clause-of-choice-here) the education system – there is some merit to the points raised here. The authors talk about how formal logic, formal systems, numerical analysis, and algorithmic analysis form the basic toolkit of a CS engineer, and how modern teaching approaches fail to adequately impart these skills. Much of their ire is directed against Java as an instructional programming language. To that, I would add MATLAB (as dearly as I love it). To quote the blog, which in turn quotes another article (talk about recursive quoting):

 

Dewar says in the interview that, " 'A lot of it is, ‘Let’s make this [computer science and programming] all more fun.’ You know, ‘Math is not fun, let’s reduce math requirements. Algorithms are not fun, let’s get rid of them. Ewww – graphic libraries, they’re fun. Let’s have people mess with libraries. And [forget] all this business about ‘command line’ – we’ll have people use nice visual interfaces where they can point and click and do fancy graphic stuff and have fun.' "

 

While the original article is directed particularly towards undergrad schooling, this is something I would subscribe to at a larger level. I find the CS courses I take extremely light and fluffy – comparatively easy, intellectually, with little or no emphasis on theory – and mostly just “application level” programming, i.e. basically a propagation of the “go use some libraries to do some fancy stuff, the math or algorithms of which you don’t need to really understand” philosophy. To get some really heavy-duty lifting these days, I have to look outside the department – the first recourse is to the ECE or IND_ENG depts – and if I want to step it up some more, then the MATH department (which gets way too hard for a soft, effete CS guy like me). For example, I took a machine learning course offered by the CSE dept, which was nothing more than a perusal of the “standard algorithms” available and some applications thereof. Now, I’m auditing (that’s all that I dare do) a Statistical Learning course offered by the STATS dept, which is at a whole other level of math and analysis – hypothesis testing, estimation theory, linear operator theory, functional operator theory, etc. And this is just an introductory/overview course – it promises to get more rigorous next quarter!

 

And I think this is not an attribute of CS education in the US alone, but of CS education in general. The other, more established engineering disciplines IMO require a larger amount of rigour and drilling to be good at. One reason could be that they deal with the real, physical world, and have to develop a deep appreciation of the laws of physics that govern what they do. Unlike CS – a virtual world, where anything goes (as long as it conforms to basic logic).

 

On an aside, the article states “Seeing a complete Lisp interpreter written in Lisp is an intellectual revelation that all computer scientists should experience.” This, I agree with whole heartedly. Programming the Lisp meta-circular interpreter in CSE755 was the most joy I ever had with the OSU-CSE core curriculum (minus CSE725 – Theory of Computation).

Saturday, January 19, 2008

A Weaker Cheaper MRI

The IEEE Spectrum reports on the development of an MRI machine that operates at a meagre 46 microteslas (almost the same strength as the earth’s magnetic field, and roughly a thirty-thousandth of the field strength of conventional MRI machines, which typically operate at ~1.5 T). The stated advantages of these machines are:

Because it needs fewer costly magnets, a weak-magnetic-field MRI machine might cost as little as US $100 000, compared with $1 million or more for a standard MRI system ... But perhaps the most exciting thing about low-field imagers is that they can also perform another imaging technique, magnetoencephalography (MEG) .... MEG measures the magnetic fields produced by brain activity and is used to study seizures. Putting the two imaging modes together could mean matching images of brain activity from MEG with images of brain structure from MRI, and it might make for more precise brain surgery.

Low-field MRI has other advantages, says John Clarke, a physicist at the University of California, Berkeley.... “I’m personally quite excited about the idea of imaging tumors” with low-field MRI, he says. The difference between cancerous and noncancerous tissue is subtle, particularly in breast and prostate tumors, and the high-field strengths used in conventional MRI can drown out the signal. But low-field MRI will be able to detect the differences, Clarke predicts. A low-field MRI might also allow for scans during surgical procedures such as biopsies, because the weaker magnetic field would not heat up or pull at the metal biopsy needle.

Now this seems a really exciting development in MRI technology – one that would make MRIs a practical medical device, rather than the hi-tech, hi-cost curiosities they are now. And more than just the points mentioned in this article, the reason I found this technology so alluring is the potential of developing low-cost, easily portable and deployable machines that can be used in the small clinics that dot the world, rather than today’s power-hungry behemoths that cost a fortune to build and operate, and that are available to less than 10% of the world’s population.

 

Sunday, January 13, 2008

The Princeton Companion to Mathematics

I've been reading sample articles of this book here.
If it achieves its purpose, it's a must-have...

Friday, January 11, 2008

MICCAI 2008

MICCAI 2008, the 11th International Conference on Medical Image Computing and Computer Assisted Intervention, will be held from September 6 to 10, 2008 in New York City, USA. MICCAI typically attracts over 600 world-leading scientists, engineers and clinicians from a wide range of disciplines associated with medical imaging and computer assisted surgery.

Topics

Topics to be addressed at MICCAI 2008 include, but are not limited to:

  • General Medical Image Computing
  • Computer Assisted Interventional Systems and Robotics
  • Visualization and Interaction
  • General Biological and Neuroscience Image Computing
  • Computational Anatomy (statistics on anatomy)
  • Computational Physiology (virtual organs)
  • Innovative Clinical and Biological Applications

Important Dates

January 20, 2008 Tutorial and workshop proposals
February 10, 2008 Acceptance of tutorials and workshops
March 7, 2008 Submission of full papers
May 14, 2008 Acceptance of papers
June 9, 2008 Camera ready copy for papers
September 6 - 10, 2008 Tutorials, Conference, Workshops

Submission of Papers

We invite electronic submissions for MICCAI 2008 (LNCS style, double blind review) of up to 8-page papers for oral or poster presentation. Papers will be reviewed by members of the program review committee and assessed for quality and best means of presentation. Besides advances in methodology, we would also like to encourage submission of papers that demonstrate clinical relevance, clinical applications, and validation studies.

Proposals for Tutorials and Workshops

Tutorials will be held on September 6 and/or 9, 2008 and will complement and enhance the scientific program of MICCAI 2008. The purpose of the tutorials is to provide educational material for training new professionals in the field including students, clinicians and new researchers.

Workshops will be held on September 6 and/or 9 2008 and will provide opportunity for discussing technical and application issues in depth. The purpose of the workshops is to provide a comprehensive forum on topics which will not be fully explored during the main conference.

Executive Committee

Leon Axel, New York University, USA (General Co-Chair)
Brian Davies, Imperial College, UK (General Co-Chair)
Dimitris N Metaxas, Rutgers University, USA (General Chair)

Thursday, January 10, 2008

Imaging in Systems Biology

Sean G. Megason and Scott E. Fraser

Beckman Institute and Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA

Available online 6 September 2007.

Most systems biology approaches involve determining the structure of biological circuits using genomewide “-omic” analyses. Yet imaging offers the unique advantage of watching biological circuits function over time at single-cell resolution in the intact animal. Here, we discuss the power of integrating imaging tools with more conventional -omic approaches to analyze the biological circuits of microorganisms, plants, and animals.

(link)

Tuesday, December 4, 2007

Sparse Decomposition and Modeling of Anatomical Shape Variation

Sent to you by Shantanu via Google Reader:

Recent advances in statistics have spawned powerful methods for regression and data decomposition that promote sparsity, a property that facilitates interpretation of the results. Sparse models use a small subset of the available variables and may perform as well or better than their full counterparts if constructed carefully. In most medical applications, models are required to have both good statistical performance and a relevant clinical interpretation to be of value. Morphometry of the corpus callosum is one illustrative example. This paper presents a method for relating spatial features to clinical outcome data. A set of parsimonious variables is extracted using sparse principal component analysis, producing simple yet characteristic features. The relation of these variables with clinical data is then established using a regression model. The result may be visualized as patterns of anatomical variation related to clinical outcome. In the present application, landmark-based shape data of the corpus callosum is analyzed in relation to age, gender, and clinical tests of walking speed and verbal fluency. To put the data-driven sparse principal component method into perspective, we consider two alternative techniques, one where features are derived using a model-based wavelet approach, and one where the original variables are regressed directly on the outcome.
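As a toy illustration of sparsity-promoting decomposition (not the authors' exact algorithm), a soft-thresholded power iteration recovers a sparse leading component, zeroing out the loadings of variables that carry little variance:

```python
import numpy as np

def sparse_pc(X, lam=0.1, n_iter=100):
    """First sparse principal component via soft-thresholded power
    iteration: a minimal sketch of sparse PCA, not the paper's method.

    Each iteration multiplies by the sample covariance, soft-thresholds
    small loadings to exactly zero, and renormalizes.
    """
    C = np.cov(X, rowvar=False)
    v = np.ones(C.shape[0])
    for _ in range(n_iter):
        v = C @ v
        v = np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)  # soft-threshold
        n = np.linalg.norm(v)
        if n == 0:
            break
        v /= n
    return v
```

The exact zeros in the loading vector are what make such components clinically interpretable, which is the abstract's main selling point.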


Possible topics for Winter 888

Gaussian Processes for Machine Learning

"Roughly speaking a stochastic process is a generalization of a probability distribution (which describes a finite-dimensional random variable) to functions. By focussing on processes which are Gaussian, it turns out that the computations required for inference and learning become relatively easy. Thus, the supervised learning problems in machine learning which can be thought of as learning a function from examples can be cast directly into the Gaussian process framework."

The book is online.
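The "learning a function from examples" view really does reduce to a small amount of linear algebra. Here's a minimal numpy sketch of the GP posterior mean with a squared-exponential kernel (illustrative only; the hyperparameters and the noise jitter are made up, not taken from the book):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6, length_scale=1.0):
    """Posterior mean of a zero-mean GP regression: K_* K^-1 y."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train, length_scale)
    return K_s @ np.linalg.solve(K, y_train)
```

With near-zero noise the posterior mean interpolates the training data, and between the data points it falls back smoothly toward the prior mean, which is the behavior the book develops at length.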

Graphical Models

"Graphical models are a marriage between probability theory and graph theory. They provide a natural tool for dealing with two problems that occur throughout applied mathematics and engineering -- uncertainty and complexity -- and in particular they are playing an increasingly important role in the design and analysis of machine learning algorithms. Fundamental to the idea of a graphical model is the notion of modularity -- a complex system is built by combining simpler parts. Probability theory provides the glue whereby the parts are combined, ensuring that the system as a whole is consistent, and providing ways to interface models to data. The graph theoretic side of graphical models provides both an intuitively appealing interface by which humans can model highly-interacting sets of variables as well as a data structure that lends itself naturally to the design of efficient general-purpose algorithms.

Many of the classical multivariate probabilistic systems studied in fields such as statistics, systems engineering, information theory, pattern recognition and statistical mechanics are special cases of the general graphical model formalism -- examples include mixture models, factor analysis, hidden Markov models, Kalman filters and Ising models. The graphical model framework provides a way to view all of these systems as instances of a common underlying formalism. This view has many advantages -- in particular, specialized techniques that have been developed in one field can be transferred between research communities and exploited more widely. Moreover, the graphical model formalism provides a natural framework for the design of new systems." --- Michael Jordan, 1998.
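The "modularity" point in the quote is easy to see on a toy chain A → B → C: the joint factorizes into local conditionals, and marginals can be computed by summing out one variable at a time instead of building the full joint. A small numpy illustration (the probability tables are invented for the example):

```python
import numpy as np

# Chain A -> B -> C over binary variables; the joint factorizes as
# p(a, b, c) = p(a) p(b|a) p(c|b).
p_a = np.array([0.6, 0.4])
p_b_given_a = np.array([[0.7, 0.3],   # rows index a, columns index b
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.9, 0.1],
                        [0.5, 0.5]])

# Variable elimination: sum out a, then b (cost linear in chain length).
p_b = p_a @ p_b_given_a
p_c = p_b @ p_c_given_b

# Brute force over the full joint gives the same marginal.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
assert np.allclose(joint.sum(axis=(0, 1)), p_c)
```

On a chain of n binary variables the brute-force joint has 2^n entries while elimination does n small matrix-vector products -- that gap is the "efficient general-purpose algorithms" the quote refers to.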

Wednesday, November 28, 2007

Point Matching

Shape Contexts [1] by Belongie, Malik, and Puzicha at Berkeley looks like a promising approach for finding point correspondences (along the lines of ICP, TPS-RPM, etc.).

 

Give it a looksie whenever you get the time.

 

[1] http://www.eecs.berkeley.edu/Research/Projects/CS/vision/shape/sc_digits.html
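For a flavor of the descriptor itself: a shape context is a log-polar histogram of where all the other points on the shape fall relative to a given point. A simplified numpy sketch (this omits the paper's mean-distance scale normalization, and the bin counts are arbitrary):

```python
import numpy as np

def shape_context(points, i, n_r=5, n_theta=12):
    """Log-polar histogram of the positions of all other points
    relative to point i -- a simplified shape-context descriptor."""
    p = np.delete(points, i, axis=0) - points[i]
    r = np.log1p(np.hypot(p[:, 0], p[:, 1]))       # log-radius bins
    theta = np.arctan2(p[:, 1], p[:, 0])           # angle bins
    hist, _, _ = np.histogram2d(r, theta, bins=(n_r, n_theta),
                                range=[[0, r.max() + 1e-9], [-np.pi, np.pi]])
    return hist / hist.sum()
```

Correspondences are then found by comparing descriptors across the two point sets, e.g. with a chi-square histogram cost fed into a bipartite matching, which is the part that makes it comparable to ICP and TPS-RPM.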

Monday, November 19, 2007

IEEE International Symposium on Biomedical Imaging - ISBI 2008 Call for Papers

From: IEEE SPS Lists
Sent: Monday, November 19, 2007 17:56
Subject: IEEE International Symposium on Biomedical Imaging - ISBI 2008 Call for Papers

CALL FOR PAPERS
2008 IEEE International Symposium
on Biomedical Imaging: From Nano to Macro

May 14-17, 2008
Paris Marriott Rive Gauche Hotel & Conference Center, Paris, France

** Paper Submission Deadline: December 7, 2007 **

The Fifth IEEE International Symposium on Biomedical Imaging (ISBI'08) will be held May 14-17, 2008, in Paris, France. The previous meetings have played a leading role in facilitating interaction between researchers in medical and biological imaging. The 2008 meeting will continue the tradition of fostering cross-fertilization between different imaging communities and contributing to an integrative imaging approach across all scales of observation.

ISBI 2008 is a joint initiative of the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS), with the support of Optics Valley. The meeting will feature an opening afternoon of tutorials and short courses, followed by a strong scientific program of plenary talks and special sessions as well as oral and poster presentations of peer-reviewed contributed papers. An industrial exhibition is planned.

High-quality papers are solicited containing original contributions to the algorithmic, mathematical and computational aspects of biomedical imaging, from nano- to macroscale. Topics of interest include image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological and statistical modeling. Papers on all molecular, cellular, anatomical and functional imaging modalities and applications are welcomed. All accepted papers will be published in the proceedings of the symposium and will afterwards also be made available online through the IEEExplore database.

Important Dates:
Deadline for submission of 4-page paper:
7 December 2007 (Midnight at International Date Line)

Notification of acceptance/rejection:
15 February 2008

Submission of final accepted 4-page paper:
14 March 2008

Deadline for early registration:
14 March 2008

Organizing Committee

General Chair
Jean-Christophe Olivo-Marin, Institut Pasteur, Paris, France

Program Chairs
Isabelle Bloch, ENST, Paris, France
Andrew Laine, Columbia University, NYC, USA

Special Sessions
Josiane Zerubia, INRIA, Sophia-Antipolis, France
Wiro Niessen, Erasmus Medical Ctr, Rotterdam, The Netherlands

Plenaries
Christian Roux, ENST Bretagne, Brest, France

Tutorials
Michael Unser, EPFL, Lausanne, Switzerland

Finances
Elsa Angelini, ENST, Paris, France

Publications
Habib Benali, Inserm, Paris, France

Local Arrangements
Severine Dubuisson, Univ. Pierre et Marie Curie, Paris, France
Vannary Meas-Yedid, Institut Pasteur, Paris, France

Industrial Liaison
Spencer Shorte, Institut Pasteur, Paris, France
Nicholas Ayache, INRIA, Sophia-Antipolis, France

Institutional Liaison
Claude Boccara, ESPCI, Paris, France

Technical Liaison
Sebastian Ourselin, CSIRO, Brisbane, Australia

American Liaison
Jeff Fessler, University of Michigan, Ann Arbor, USA

Monday, November 12, 2007

Diffeomorphic deformation fields

Gary E. Christensen, Sarang C. Joshi, Michael I. Miller, "Volumetric Transformation of Brain Anatomy" IEEE Trans. Med. Imag. (1997) 

 

http://citeseer.ist.psu.edu/cache/papers/cs/25121/http:zSzzSzwww.icaen.uiowa.eduzSz~geczSzpaperszSzchristensen_tmi97.pdf/christensen97volumetric.pdf

 

 

Abstract

This paper presents diffeomorphic transformations of three-dimensional (3-D) anatomical image data of the macaque occipital lobe and whole brain cryosection imagery and of deep brain structures in human brains as imaged via magnetic resonance imagery. These transformations are generated in a hierarchical manner, accommodating both global and local anatomical detail. The initial low-dimensional registration is accomplished by constraining the transformation to be in a low-dimensional basis. The basis is defined by the Green’s function of the elasticity operator placed at predefined locations in the anatomy and the eigenfunctions of the elasticity operator. The high-dimensional large deformations are vector fields generated via the mismatch between the template and target-image volumes constrained to be the solution of a Navier–Stokes fluid model. As part of this procedure, the Jacobian of the transformation is tracked, insuring the generation of diffeomorphisms. It is shown that transformations constrained by quadratic regularization methods such as the Laplacian, biharmonic, and linear elasticity models, do not ensure that the transformation maintains topology and, therefore, must only be used for coarse global registration.
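The Jacobian tracking mentioned at the end is straightforward to do numerically on a grid: the transformation stays a diffeomorphism only while det J remains positive everywhere, so a fold in the deformation field shows up as a sign change. A 2-D numpy sketch using finite differences (illustrative only, not the paper's implementation):

```python
import numpy as np

def jacobian_determinant(phi_x, phi_y):
    """Pointwise Jacobian determinant of a 2-D transformation given as
    coordinate maps phi_x(y, x), phi_y(y, x) on a regular grid.
    Positivity everywhere is the condition tracked to ensure the map
    preserves topology (remains a diffeomorphism)."""
    dphix_dy, dphix_dx = np.gradient(phi_x)  # axis 0 is y, axis 1 is x
    dphiy_dy, dphiy_dx = np.gradient(phi_y)
    return dphix_dx * dphiy_dy - dphix_dy * dphiy_dx

# Identity grid: det J == 1 everywhere. A displacement large enough to
# fold the grid onto itself would drive the determinant negative.
y, x = np.mgrid[0:32, 0:32].astype(float)
```

This is exactly the kind of check that the quadratic-regularization methods criticized in the abstract fail: nothing in a Laplacian or elastic penalty prevents det J from going through zero.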

Wednesday, November 7, 2007

A history of quaternions

A delightful account of the story, the logic, and the personalities behind Hamilton's development of quaternions and versors at http://www.jstor.org/view/00255572/ap060385/06a00280/0
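For readers who haven't met them: quaternions extend the complex numbers with three imaginary units i, j, k satisfying i² = j² = k² = ijk = -1, and Hamilton's product lets a unit quaternion rotate a 3-D vector via q v q*. A small numpy sketch (conventions here are w-first and right-handed; both are my choices):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(v, axis, angle):
    """Rotate 3-D vector v about a unit axis by angle, via q v q*."""
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return qmul(qmul(q, np.concatenate([[0.0], v])), q_conj)[1:]
```

The non-commutativity that so troubled Hamilton's contemporaries is right there in qmul: ij = k but ji = -k.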

Tutorial on computational methods

A good resource for the computational aspects of stochastic processes, ODEs, PDEs, and statistical mechanics (from a computational-physics point of view) at:

http://homepage.univie.ac.at/franz.vesely/cp_tut/nol2h/new/index.html

by Franz J. Vesely at the University of Vienna.
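As a taste of the stochastic side of such tutorials, the workhorse scheme for SDEs is Euler-Maruyama: it is explicit Euler plus a Gaussian increment scaled by √dt. A minimal sketch (an illustrative example of the standard scheme, not code taken from the tutorial):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_max, dt, rng):
    """Integrate dX = drift(X) dt + diffusion(X) dW on [0, t_max]
    by the Euler-Maruyama scheme; returns the full sample path."""
    n = int(t_max / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(scale=np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
    return x
```

With the diffusion set to zero it reduces to explicit Euler for the ODE dx/dt = drift(x); with drift(x) = -θx and constant diffusion it samples an Ornstein-Uhlenbeck process, a standard first exercise in this area.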

Wednesday, October 31, 2007

IEEE Visualization 2007


We're at Vis 2007. Here's the program

Some interesting papers -

Visualizing Whole-Brain DTI Tractography with GPU-based Tuboids and LoD Management. Vid Petrovic, James Fallon, Falko Kuester.

Monday, October 29, 2007

Stellar presentation by Mosaliganti

I presented the use of N-point correlation functions in a geometry-driven visualization process at KAV 08. The presentation was well received, and a couple of the panel members walked up to me afterwards to express their appreciation. During the panel meeting, one of the committee members singled out my paper for special mention.

Here's a link to the presentation.

Calendar