ABSTRACT:
Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast, and it questions notions generally held as "laws of nature" by practitioners of numerical computing:
1. High-level dynamic programs have to be slow.
2. One must prototype in one language and then rewrite in another language for speed or deployment.
3. There are parts of a system appropriate for the programmer, and other parts best left untouched because they were built by the experts.
We introduce the Julia programming language and its design: a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming.
Julia shows that one can achieve machine performance without sacrificing human convenience.
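To make these two ideas concrete, here is a minimal Julia sketch (the names combine and mysum are illustrative, not drawn from the paper): multiple dispatch selects a method from the runtime types of all arguments, while generic programming lets a single definition run unchanged over any element type that supports zero and +.

    # Multiple dispatch: one generic function, several methods; Julia
    # picks the method from the runtime types of all the arguments.
    combine(x::Number, y::Number) = x + y        # numbers add
    combine(x::String, y::String) = string(x, y) # strings concatenate

    # Generic programming: one definition works for any element type
    # that supports zero() and +, e.g. Int, Float64, BigFloat.
    function mysum(v)
        s = zero(eltype(v))
        for x in v
            s += x
        end
        return s
    end

    combine(1, 2.5)        # 3.5, via the (Number, Number) method
    combine("ab", "cd")    # "abcd", via the (String, String) method
    mysum([1, 2, 3])       # 6
    mysum([1.0, 2.0, 3.0]) # 6.0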
Author(s): Jeff Bezanson, Alan Edelman, Stefan Karpinski, Viral B. Shah
Publisher: arXiv.org
Year: 2014
Language: English
Pages: 37
Tags: julia, language, programming, mathematical software, abstraction, high performance, numerical computing
1 Scientific computing languages: The Julia innovation
1.1 Computing transcends communities
1.2 Julia architecture and language design philosophy
2 A taste of Julia
2.1 A brief tour
2.2 An invaluable tool for numerical integrity
2.3 The Julia community
3 Writing programs with and without types
3.1 The balance between human and the computer
3.2 Julia's recognizable types
3.3 User's own types are first class too
3.4 Vectorization: Key Strengths and Serious Weaknesses
3.5 Type inference rescues "for loops" and so much more
4 Code selection: Run the right code at the right time
4.1 Multiple Dispatch
4.2 Code selection from bits to matrices
4.2.1 Summing Numbers: Floats and Ints
4.2.2 Summing Matrices: Dense and Sparse
4.3 The many levels of code selection
4.4 Is "code selection" just traditional object-oriented programming?
4.5 Quantifying the use of multiple dispatch
4.6 Case Study for Numerical Computing
4.6.1 Determinant: Simple Single Dispatch
4.6.2 A Symmetric Arrow Matrix Type
5 Leveraging language design for high performance libraries
5.1 Integer arithmetic
5.2 A powerful approach to linear algebra
5.2.1 Matrix factorizations
5.2.2 User-extensible wrappers for BLAS and LAPACK
5.3 High Performance Polynomials and Special Functions with Macros
5.4 Easy and flexible parallelism
5.5 Performance Recap
6 Conclusion and Acknowledgments