Wednesday, August 13, 2008

Gray Matter Density of Mathematicians

Below is the abstract from the article “Increased Gray Matter Density in the Parietal Cortex of Mathematicians: A Voxel-Based Morphometry Study” by K. Aydin, A. Ucar, K.K. Oguz, O.O. Okur, A. Agayev, Z. Unal, S. Yilmaz, and C. Ozturk, published in the American Journal of Neuroradiology (AJNR), Nov-Dec 2007:

BACKGROUND AND PURPOSE:

The training to acquire or practicing to perform a skill, which may lead to structural changes in the brain, is called experience-dependent structural plasticity. The main purpose of this cross-sectional study was to investigate the presence of experience-dependent structural plasticity in mathematicians’ brains, which may develop after long-term practice of mathematic thinking.

MATERIALS AND METHODS:

Twenty-six volunteer mathematicians, who have been working as academicians, were enrolled in the study. We applied an optimized method of voxel-based morphometry in the mathematicians and the age- and sex-matched control subjects. We assessed the gray and white matter density differences in mathematicians and the control subjects. Moreover, the correlation between the cortical density and the time spent as an academician was investigated.

RESULTS:

We found that cortical gray matter density in the left inferior frontal and bilateral inferior parietal lobules of the mathematicians were significantly increased compared with the control subjects. Furthermore, increase in gray matter density in the right inferior parietal lobule of the mathematicians was strongly correlated with the time spent as an academician (r = 0.84; P < .01). Left-inferior frontal and bilateral parietal regions are involved in arithmetic processing. Inferior parietal regions are also involved in high-level mathematic thinking, which requires visuospatial imagery, such as mental creation and manipulation of 3D objects.

CONCLUSION:

The voxel-based morphometric analysis of mathematicians’ brains revealed increased gray matter density in the cortical regions related to mathematic thinking. The correlation between cortical density increase and the time spent as an academician suggests experience-dependent structural plasticity in mathematicians’ brains.
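The two statistical operations behind these findings can be sketched on synthetic data. Everything below (group means, voxel counts, the effect size on `years`) is invented for illustration; the authors used an optimized VBM pipeline, not this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_vox = 1000                                           # hypothetical voxel count
math_grp = rng.normal(0.55, 0.05, size=(26, n_vox))    # gray matter density maps
ctrl_grp = rng.normal(0.50, 0.05, size=(26, n_vox))    # matched controls

# (1) voxelwise two-sample t-test, Bonferroni-corrected for n_vox comparisons
t, p = stats.ttest_ind(math_grp, ctrl_grp, axis=0)
sig = p < 0.05 / n_vox
print(f"{sig.sum()} voxels survive correction")

# (2) correlate mean density of one "region" with years spent as an academician
years = rng.uniform(1, 30, size=26)
region = math_grp[:, :50].mean(axis=1) + 0.002 * years  # synthetic dependence
r, p_r = stats.pearsonr(region, years)
print(f"r = {r:.2f}, p = {p_r:.3g}")
```

The reported r = 0.84 between right-inferior-parietal density and years of practice is exactly the second kind of computation, run on real segmented gray matter maps.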

Friday, August 1, 2008

Introduction to fMRI Analysis

A very good set of slides introducing fMRI analysis methods and statistical
inference, from Prof. Vince Calhoun's course at:

http://www.ece.unm.edu/~vcalhoun/courses/fMRI_Spring07/fmricourse.htm

Also, with more emphasis on the physics and neuroscientific aspects,
HST.583 at MIT:

http://ocw.mit.edu/OcwWeb/Health-Sciences-and-Technology/HST-583Fall-2004/CourseHome/index.htm

Re: Mathematics in Brain Imaging

This is good... I think the workshop at MBI in June also has some real good ones on similar topics...

raghu

On Fri, Aug 1, 2008 at 3:59 PM, firdaus.janoos <firdaus.janoos@gmail.com> wrote:
The presentations (audio + video + slides) for the summer school on Mathematics in Brain Imaging, hosted by UCLA's Institute for Pure and Applied Mathematics, are available at: https://www.ipam.ucla.edu/schedule.aspx?pc=mbi2008

They will be adding more slides as the speakers send them.

There were some very good lectures/talks - esp. in the computational anatomy week (14-19 July). I've been told that the talks by Michael Miller, Sarang Joshi, Xavier Pennec and Baba Vemuri were particularly good.

Also, in the functional imaging week (21-25 July), some excellent talks, esp. on ICA, validation, MCP correction and machine learning methods.





--
Raghu Machiraju, Associate Professor,
The Ohio State University, Columbus, OH 43210

Mathematics in Brain Imaging

The presentations (audio + video + slides) for the summer school on Mathematics in Brain Imaging, hosted by UCLA's Institute for Pure and Applied Mathematics, are available at: https://www.ipam.ucla.edu/schedule.aspx?pc=mbi2008

They will be adding more slides as the speakers send them.

There were some very good lectures/talks - esp. in the computational anatomy week (14-19 July). I've been told that the talks by Michael Miller, Sarang Joshi, Xavier Pennec and Baba Vemuri were particularly good.

Also, in the functional imaging week (21-25 July), some excellent talks, esp. on ICA, validation, MCP correction and machine learning methods.


Sunday, July 27, 2008

Efron on Fisher

A very interesting and thought-provoking article by Bradley Efron on the place of R.A. Fisher’s philosophies in the statistics of today, characterized by computational brute force:

http://www.jstor.org/pss/2676745

Friday, July 18, 2008

Regularization in Vision

In the 1985 Nature review article “Computational vision and regularization theory” (http://www.nature.com/nature/journal/v317/n6035/pdf/317314a0.pdf) by Tomaso Poggio, Vincent Torre and Christof Koch, which outlines the role of regularization methods in finding plausible solutions to computer vision problems, the authors offer this very interesting speculation about biological vision:

“One of the mysteries of biological vision is its speed. Parallel processing has often been advocated as the answer to this problem. The model of computation provided by digital processes is, however, unsatisfactory, especially given the increasing evidence that neurones (sic) are complex devices, very different from simple digital switches. It is, therefore, interesting to consider whether the regularization approach to early vision may lead to a different type of parallel computation. We have recently suggested that linear, analog networks (either electrical or chemical) are, in fact, a natural way of solving the variational principles dictated by standard regularization theory.”

Not only do the authors provide a plausible reason for the computational speed of biological vision, but they also offer a compelling argument against the AI philosophy of treating intelligence and cognitive processes as separate from their biological/physical substrates.
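A minimal numerical instance of the "standard regularization theory" the authors invoke: recover a smooth signal u from noisy samples f by minimizing ||u - f||^2 + lam*||Du||^2, which has the closed-form solution u = (I + lam*D'D)^{-1} f. The sizes and the value of lam below are illustrative; the paper's point is that analog networks can solve this same quadratic variational problem in parallel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * x)
f = clean + rng.normal(0, 0.3, n)        # noisy observation of the signal

D = np.diff(np.eye(n), axis=0)           # first-difference operator, shape (n-1, n)
lam = 50.0                               # regularization weight (illustrative)
u = np.linalg.solve(np.eye(n) + lam * D.T @ D, f)

# the regularized estimate should be closer to the clean signal than f is
print("error before:", np.linalg.norm(f - clean))
print("error after: ", np.linalg.norm(u - clean))
```

The same linear solve is what a resistive analog network settles into at equilibrium, which is the connection the quote is driving at.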

Monday, July 14, 2008

Dirty Pictures

A foundational work on understanding and analyzing images within a rigorous
statistical framework, specifically an interpretation of standard image-processing
problems in terms of Markov random field models and Bayesian estimation:

On the Statistical Analysis of Dirty Pictures
by Julian Besag
Journal of the Royal Statistical Society. Series B (Methodological), Vol. 48,
No. 3 (1986), pp. 259-302


http://www.jstor.org/stable/pdfplus/2345426.pdf
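The algorithm Besag introduces in this paper, iterated conditional modes (ICM), is simple enough to sketch for denoising a binary image under an Ising-type prior: each pixel is repeatedly set to the label that minimizes a local energy given its neighbors. The noise level and the weights beta and sigma below are hypothetical, and the energy is a simplified version of Besag's likelihood-based one.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros((32, 32), dtype=int)
truth[8:24, 8:24] = 1                          # a white square on black
noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)  # ~20% flipped

beta = 1.5      # smoothness weight (Ising-type prior); hypothetical value
sigma = 1.0     # data-fidelity weight; hypothetical value
est = noisy.copy()
for _ in range(5):                             # a few ICM sweeps suffice
    for i in range(est.shape[0]):
        for j in range(est.shape[1]):
            nbrs = [est[x, y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < est.shape[0] and 0 <= y < est.shape[1]]
            # local energy of each candidate label: data term + prior term
            cost = [sigma * (noisy[i, j] != k) + beta * sum(n != k for n in nbrs)
                    for k in (0, 1)]
            est[i, j] = int(np.argmin(cost))   # conditional mode at this pixel

print("noisy error:", (noisy != truth).mean(), "ICM error:", (est != truth).mean())
```

ICM is a greedy (and fast) alternative to simulated annealing for MAP estimation in MRF models, which is much of what the paper is about.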

ICML Discussion Page

I really like this idea. Each page has details of the paper and a discussion thread.

http://www.conflate.net/icml/

It’s a good way to get feedback, and it also increases accountability on the part of the authors.

Shantanu

Thursday, July 3, 2008

(Super) Fast functional imaging

An interesting paper in this month’s NeuroImage on a technique called Inverse Imaging for BOLD fMRI that reports 100 ms volume acquisition times using multiple-coil arrays.

Lin, Fa-Hsuan; Witzel, Thomas; Mandeville, Joseph B; Polimeni, Jonathan R.; Zeffiro, Thomas A.; Greve, Douglas N.; Wiggins, Graham; Wald, Lawrence L.; Belliveau, John W. “Event-related single-shot volumetric functional magnetic resonance inverse imaging of visual processing” Neuroimage, Volume 42, issue 1 (August 1, 2008), p. 230-247

Abstract:

Developments in multi-channel radio-frequency (RF) coil array technology have enabled functional magnetic resonance imaging (fMRI) with higher degrees of spatial and temporal resolution. While modest improvement in temporal acceleration has been achieved by increasing the number of RF coils, the maximum attainable acceleration in parallel MRI acquisition is intrinsically limited only by the amount of independent spatial information in the combined array channels. Since the geometric configuration of a large-n MRI head coil array is similar to that used in EEG electrode or MEG SQUID sensor arrays, the source localization algorithms used in MEG or EEG source imaging can be extended to also process MRI coil array data, resulting in greatly improved temporal resolution by minimizing k-space traversal during signal acquisition. Using a novel approach, we acquire multi-channel MRI head coil array data and then apply inverse reconstruction methods to obtain volumetric fMRI estimates of blood oxygenation level dependent (BOLD) contrast at unprecedented whole-brain acquisition rates of 100 ms. We call this combination of techniques magnetic resonance Inverse Imaging (InI), a method that provides estimates of dynamic spatially-resolved signal change that can be used to construct statistical maps of task-related brain activity. We demonstrate the sensitivity and inter-subject reliability of volumetric InI using an event-related design to probe the hemodynamic signal modulations in primary visual cortex. Robust results from both single subject and group analyses demonstrate the sensitivity and feasibility of using volumetric InI in high temporal resolution investigations of human brain function.
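At its core, the "inverse reconstruction" the abstract describes is a regularized minimum-norm solution of an underdetermined linear system y = Ax (a few coil channels, many voxels), exactly as in MEG/EEG source imaging. A toy sketch with a random stand-in for the coil sensitivity matrix; the sizes and lam are hypothetical and this is not the authors' actual reconstruction:

```python
import numpy as np

rng = np.random.default_rng(3)
n_coils, n_vox = 32, 256                  # hypothetical sizes
A = rng.normal(size=(n_coils, n_vox))     # random stand-in for coil sensitivities
x_true = np.zeros(n_vox)
x_true[100:110] = 1.0                     # a focal "activation"
y = A @ x_true + rng.normal(0, 0.1, n_coils)

lam = 1.0                                 # Tikhonov regularization (illustrative)
x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_coils), y)

print("mean estimate on the active voxels:", x_hat[100:110].mean())
print("mean estimate elsewhere:", np.delete(x_hat, np.s_[100:110]).mean())
```

The estimate is blurry (32 measurements cannot pin down 256 unknowns), which is the spatial-resolution price InI pays for its 100 ms temporal resolution.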

Tuesday, May 13, 2008

FW: ACM Computing Reviews - New Hot Topic - Nonequispaced Fast Fourier Transform

---------- Forwarded message ----------
From: Annette Cords <annette@reviews.com>
Date: Tue, May 13, 2008 at 5:05 PM
Subject: ACM Computing Reviews - New Hot Topic - Nonequispaced Fast Fourier Transform
To: raghu@cse.ohio-state.edu


The Fast Fourier transform (FFT) is one of the most influential algorithms in use today. ACM Computing Reviews is pleased to announce the release of its new Hot Topic, "The Nonequispaced FFT: An Indispensable Algorithm for Applied Science." To read our latest Hot Topic, click here:
http://www.reviews.com/hottopic/hottopic_essay_08.cfm

Written by Daniel Potts of Chemnitz University of Technology, this Hot Topic looks at the application and potential of fast Fourier transforms. The algorithm is used in MP3 data generation, for digital television and radio encoding, and much more. When the fast Fourier transform is nonequispaced, it trades exactness for specificity. The nonequispaced fast Fourier transform has become the basis for new algorithms and promises to bring automatic optimization to specific hardware, such as Blue Gene, in the future.

Hot Topics include links to related web pages, articles, and books, and are updated on a regular basis. Computing Reviews is a collaboration between Reviews.com and the Association for Computing Machinery (ACM), and can be read daily at www.reviews.com.
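The problem the essay is about can be stated in a few lines of code: evaluate trigonometric sums at arbitrary, non-equispaced nodes, which a plain FFT cannot do because it assumes equispaced samples. Below is the direct O(NM) evaluation that defines the task; the NFFT approximates these same sums in roughly O(N log N + M) time:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 64, 50
k = np.arange(-N // 2, N // 2)                 # frequency indices -N/2 .. N/2-1
c = rng.normal(size=N) + 1j * rng.normal(size=N)   # Fourier coefficients
x = rng.uniform(-0.5, 0.5, size=M)             # nonequispaced sample nodes

# direct evaluation of f_j = sum_k c_k exp(-2*pi*i*k*x_j), cost O(N*M)
f_direct = np.exp(-2j * np.pi * np.outer(x, k)) @ c

# the same sums written out explicitly, as a sanity check
f_loop = np.array([sum(c[m] * np.exp(-2j * np.pi * k[m] * xj) for m in range(N))
                   for xj in x])
print("max deviation:", np.abs(f_direct - f_loop).max())
```

The NFFT reaches its near-FFT complexity by oversampling on a regular grid, applying an ordinary FFT, and interpolating to the irregular nodes with a compactly supported window, which is where the "trades exactness" remark comes from.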

Saturday, April 19, 2008

Research Seminar Presentations

Here are the first four presentations from the Spring 08 research seminars:

Microscopy Image Analysis for Phenotyping Studies - Kishore Mosaliganti (link)
Deriving Biological Insights: Optical Sectioning Microscopy - Shantanu Singh (link)
Diffusion Tensor Imaging Research - Okan Irfanoglu (link)
fMRI Analysis: Methods and Challenges - Firdaus Janoos (link)

These presentations were meant to give an overview of research in our group and present some open problems to the audience.

Tuesday, March 4, 2008

Document Database

I've found CiteULike a pretty good way of organizing, storing, and sharing papers. I've created a group for ourselves to share resources. The 888/journal club papers are tagged raghu-888 and can be accessed with this link.

Tumor Micro-environment

A short, well-written article about tumors and their micro-environment.

Sunday, February 24, 2008

Document Database

Do you have trouble keeping track of and organizing all the research papers
you download and read? Spend more time editing and arranging the
bibliography than actually writing that paper? Do BibTeX entries drive you as
nuts as they do me?

Well - there is a good piece of software called the Document Database

http://docdb.sourceforge.net/index.html - that is designed explicitly to
solve this document management nightmare.

However, this one is a bit too much for my needs - all the client-server
architecture and web-interface and whatnot. Some time ago, I was looking
over someone's shoulder and they were using a really neat document
management software. It was on a Mac I think. If you know what it was -
please do let me know!

Tuesday, February 12, 2008

Level Set Methods and Dynamic Implicit Surfaces

Level Set Methods and Dynamic Implicit Surfaces
Series: Applied Mathematical Sciences , Vol. 153
Osher, Stanley, Fedkiw, Ronald
2003, XIII, 273 p., 109 illus., 24 in color, Hardcover
ISBN: 978-0-387-95482-0

The whole book is available online here.

Monday, February 11, 2008

What does an fMRI signal measure - exactly?

Nikos K. Logothetis of the Max Planck Institute, in an article in Nature Neuroscience, explains in greater detail the relationship between the effect measured by fMRI (the Blood Oxygenation Level Dependent signal), the cerebral metabolic rate of oxygen consumption (CMRO2), and the underlying neural activity:

http://www.nature.com/neuro/journal/v10/n10/full/nn1007-1230.html

Friday, February 8, 2008

Matlab codes for active contour without edges (levelset based)

Somebody implemented the active contour without edges algorithm (based on Chan and Vese's paper) in Matlab at http://www.postulate.org/segmentation.php. It runs well, but is slow.

Another Matlab implementation is by Dr. Chunming Li (a collaborator of Dr. Kao) and is available at http://www.engr.uconn.edu/~cmli/code/. It is much faster and also includes an implementation for multiple level-set functions.
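Stripped of the curvature (length) term and the Dirac-delta weighting, the Chan-Vese model reduces to evolving a level-set function by how close each pixel is to the two region averages c1 and c2. This simplification is mine for the sake of a short sketch, and is not either of the cited MATLAB implementations:

```python
import numpy as np

rng = np.random.default_rng(5)
img = np.full((64, 64), 0.2)
img[20:44, 20:44] = 0.8                        # bright square on dark background
img += rng.normal(0, 0.05, img.shape)

yy, xx = np.mgrid[:64, :64]
phi = 20.0 - np.hypot(yy - 32, xx - 32)        # initial contour: circle, radius 20

dt = 1.0
for _ in range(100):
    inside = phi > 0
    c1, c2 = img[inside].mean(), img[~inside].mean()   # region averages
    # pixels closer to c1 are pushed inside (phi up), closer to c2 pushed outside
    phi += dt * (-(img - c1) ** 2 + (img - c2) ** 2)

seg = phi > 0
true_seg = np.zeros((64, 64), dtype=bool)
true_seg[20:44, 20:44] = True
print("pixel agreement with ground truth:", (seg == true_seg).mean())
```

The curvature term that the full model adds is what keeps the contour smooth in heavy noise; without it this is essentially a two-means classification.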

Expectation Maximization

We discussed "Latent Variables Models and Learning with the EM Algorithm" today, mostly using slides from Sam Roweis' talk. The view of EM from the lower-bound optimization perspective [1] is particularly interesting and is perhaps the most illuminating view of EM. The discussion in [2] is also very useful for understanding extensions of EM. Of course, the canonical reference [3] is always cited and perhaps worth a read if you have a lot of patience.

We will continue the discussion next week when we will discuss the incremental version of EM [2] and revisit Kilian Pohl's work.

[1] Minka, T. (1998). Expectation-Maximization as lower bound maximization. Tutorial published on the web at http://www-white.media.mit.edu/tpminka/papers/em.html.
[2] Neal, R. M. and Hinton, G. E. (1999). A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, M. I. Jordan, Ed. MIT Press, Cambridge, MA, 355-368.
[3] Dempster, A., Laird, N., and Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.


Sunday, February 3, 2008

Matlab implementation of level-set methods

A graduate researcher at UCSB has been kind enough to post his MATLAB implementation of 2D level-set techniques here.
It looks like a good way to start tweaking and learning.

Some more biomedical image analysis software

Friday, January 25, 2008

The unemployable programmer

The blog of the IEEE Spectrum has a posting titled “Are Future US Programmers Being Taught to be Unemployable”. This was a follow-up to an article run in the Journal of Defense Software Engineering on the state of computer science education. Now even though, in all times, all places, all fields, there are those who decry the state of education – how the standards of today are a shadow of those of yore, how we’re cheapening, commercializing, debasing, dumbing-down, (put-your-adverbial-clause-of-choice-here) the education system – there is some merit to the points raised here. The authors argue that formal logic, formal systems, numerical analysis, and algorithmic analysis form the basic toolkit of a CS engineer, and that modern teaching approaches fail to adequately impart these skills. Much of their ire is directed against Java as an instructional programming language. To that, I would add MATLAB (as dearly as I love it). To quote the blog, which in turn quotes another article (talk about recursive quoting):

Dewar says in the interview that, " 'A lot of it is, ‘Let’s make this [computer science and programming] all more fun.’ You know, ‘Math is not fun, let’s reduce math requirements. Algorithms are not fun, let’s get rid of them. Ewww – graphic libraries, they’re fun. Let’s have people mess with libraries. And [forget] all this business about ‘command line’ – we’ll have people use nice visual interfaces where they can point and click and do fancy graphic stuff and have fun.' "

While the original article is directed particularly at undergraduate schooling, this is something I would subscribe to at a larger level. I find the CS courses I take extremely light and fluffy – comparatively easy, intellectually, with little or no emphasis on theory – and mostly just “application level” programming: basically a propagation of the “go use some libraries to do some fancy stuff, the math or algorithms of which you don’t really need to understand” philosophy. To get some really heavy-duty lifting these days, I have to look outside the department – the first recourse is to the ECE or IND_ENG depts – and if I want to step it up some more, then the MATH department (which gets way too hard for a soft, effete CS guy like me). For example, I took a machine learning course offered by the CSE dept, which was nothing more than a perusal of the “standard algorithms” available and some applications thereof. Now I’m auditing (that’s all that I dare do) a Statistical Learning course offered by the STATS dept, which is at a whole other level of math and analysis – hypothesis testing, estimation theory, linear operator theory, functional operator theory, etc. And this is just an introductory/overview course – it promises to get more rigorous next quarter!

And I think this is not an attribute of CS education in the US alone, but of CS education in general. The other, more established engineering disciplines IMO require a larger amount of rigour and drilling to be good at. One reason could be that they deal with the real, physical world, and have to develop a deep appreciation of the laws of physics that govern what they do. Unlike CS – a virtual world, where anything goes (as long as it conforms to basic logic).

On an aside, the article states “Seeing a complete Lisp interpreter written in Lisp is an intellectual revelation that all computer scientists should experience.” This I agree with wholeheartedly. Programming the Lisp meta-circular interpreter in CSE755 was the most joy I ever had with the OSU-CSE core curriculum (minus CSE725 – Theory of Computation).
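For readers who haven't had that revelation, here is the flavor of the exercise transplanted into a few lines of Python: a toy evaluator for a Lisp-like expression language (the real revelation, of course, comes from writing eval in Lisp itself; this is just a sketch, not the CSE755 interpreter).

```python
import operator

def evaluate(expr, env):
    if isinstance(expr, str):                  # symbol: variable lookup
        return env[expr]
    if not isinstance(expr, list):             # number: self-evaluating
        return expr
    op, *args = expr
    if op == "lambda":                         # (lambda (params) body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    if op == "if":                             # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    fn = evaluate(op, env)                     # application: eval operator and args
    return fn(*[evaluate(a, env) for a in args])

global_env = {"+": operator.add, "*": operator.mul, "<": operator.lt}
square = ["lambda", ["x"], ["*", "x", "x"]]
print(evaluate([square, 7], global_env))       # ((lambda (x) (* x x)) 7)
```

Fifteen lines cover symbols, literals, closures, conditionals, and application, which is precisely what makes the meta-circular version feel like a revelation.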

Saturday, January 19, 2008

A Weaker Cheaper MRI

The IEEE Spectrum reports on the development of an MRI machine that operates at a meagre 46 microteslas (almost the same strength as the Earth’s magnetic field, and roughly a thirty-thousandth of the field strength of conventional MRI machines, which typically operate at ~1.5 teslas). The stated advantages of these machines are:

Because it needs fewer costly magnets, a weak-magnetic-field MRI machine might cost as little as US $100 000, compared with $1 million or more for a standard MRI system ... But perhaps the most exciting thing about low-field imagers is that they can also perform another imaging technique, magnetoencephalography (MEG), .... MEG measures the magnetic fields produced by brain activity and is used to study seizures. Putting the two imaging modes together could mean matching images of brain activity from MEG with images of brain structure from MRI, and it might make for more precise brain surgery.

Low-field MRI has other advantages, says John Clarke, a physicist at the University of California, Berkeley.... “I’m personally quite excited about the idea of imaging tumors” with low-field MRI, he says. The difference between cancerous and noncancerous tissue is subtle, particularly in breast and prostate tumors, and the high-field strengths used in conventional MRI can drown out the signal. But low-field MRI will be able to detect the differences, Clarke predicts. A low-field MRI might also allow for scans during surgical procedures such as biopsies, because the weaker magnetic field would not heat up or pull at the metal biopsy needle.

Now this seems a really exciting development in MRI technology, one that would make MRIs a practical medical device, rather than the hi-tech, hi-cost curiosities they are now. More than just the points mentioned in this article, the reason I found this technology so alluring is the potential for developing low-cost, easily portable and deployable machines that can be used in the small clinics that dot the world, rather than today’s power-hungry behemoths that cost a fortune to build and operate and are available to less than 10% of the world’s population.
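The quoted field strengths are easy to put in perspective with the Larmor relation f = γB/2π (proton γ/2π ≈ 42.577 MHz/T): a 46 µT scanner works at audio-range frequencies, squarely in the territory of SQUID-based MEG hardware, while a 1.5 T scanner works at ~64 MHz.

```python
# proton gyromagnetic ratio over 2*pi, in Hz per tesla
gamma_over_2pi = 42.577e6

for name, b_field in [("conventional 1.5 T", 1.5), ("low-field 46 uT", 46e-6)]:
    larmor_hz = gamma_over_2pi * b_field
    print(f"{name}: Larmor frequency = {larmor_hz:,.0f} Hz")
```

That five-orders-of-magnitude drop in operating frequency is what lets one set of SQUID sensors do double duty for MRI and MEG.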

Sunday, January 13, 2008

The Princeton Companion to Mathematics

I've been reading sample articles of this book from here
If it achieves its purpose, it's a must-have...

Friday, January 11, 2008

MICCAI 2008

MICCAI 2008, the 11th International Conference on Medical Image Computing and Computer Assisted Intervention, will be held from September 6 to 9, 2008 in New York City, USA. MICCAI typically attracts over 600 world leading scientists, engineers and clinicians from a wide range of disciplines associated with medical imaging and computer assisted surgery.

Topics

Topics to be addressed at MICCAI 2008 include, but are not limited to:

  • General Medical Image Computing
  • Computer Assisted Interventional Systems and Robotics
  • Visualization and Interaction
  • General Biological and Neuroscience Image Computing
  • Computational Anatomy (statistics on anatomy)
  • Computational Physiology (virtual organs)
  • Innovative Clinical and Biological Applications

Important Dates

January 20, 2008 Tutorial and workshop proposals
February 10, 2008 Acceptance of tutorials and workshops
March 7, 2008 Submission of full papers
May 14, 2008 Acceptance of papers
June 9, 2008 Camera ready copy for papers
September 6 - 10, 2008 Tutorials, Conference, Workshops

Submission of Papers

We invite electronic submissions for MICCAI 2008 (LNCS style, double blind review) of up to 8-page papers for oral or poster presentation. Papers will be reviewed by members of the program review committee and assessed for quality and best means of presentation. Besides advances in methodology, we would also like to encourage submission of papers that demonstrate clinical relevance, clinical applications, and validation studies.

Proposals for Tutorials and Workshops

Tutorials will be held on September 6 and/or 9, 2008 and will complement and enhance the scientific program of MICCAI 2008. The purpose of the tutorials is to provide educational material for training new professionals in the field including students, clinicians and new researchers.

Workshops will be held on September 6 and/or 9 2008 and will provide opportunity for discussing technical and application issues in depth. The purpose of the workshops is to provide a comprehensive forum on topics which will not be fully explored during the main conference.

Executive Committee

Leon Axel, New York University, USA (General Co-Chair)
Brian Davies, Imperial College, UK (General Co-Chair)
Dimitris N Metaxas, Rutgers University, USA (General Chair)

Thursday, January 10, 2008

Imaging in Systems Biology

Sean G. Megason and Scott E. Fraser

Beckman Institute and Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA

Available online 6 September 2007.

Most systems biology approaches involve determining the structure of biological circuits using genomewide “-omic” analyses. Yet imaging offers the unique advantage of watching biological circuits function over time at single-cell resolution in the intact animal. Here, we discuss the power of integrating imaging tools with more conventional -omic approaches to analyze the biological circuits of microorganisms, plants, and animals.

(link)
