Jazz journalist Nate Chinen, who writes for the New York Times, the Village Voice, and JazzTimes, is joined by pianists Vijay Iyer and Craig Taborn, along with the Institute’s Artist-in-Residence, for a conversation about improvisational jazz and performance.
We shall discuss new pseudorandom generators for regular read-once branching programs of small width. A branching program is regular if the in-degree of every vertex in it is (either 0 or) 2. For every width d and length n, the pseudorandom generator uses a seed of length O((log d + log log n + log(1/p)) log n) to produce n bits that cannot be distinguished from a uniformly random string by any regular width-d, length-n read-once branching program, except with probability p > 0.
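To make the model concrete, here is a minimal Python sketch (my own illustration, not from the talk; the layered-dictionary representation and the parity example are illustrative choices) of evaluating a width-2 read-once branching program, which reads each input bit exactly once:

```python
def evaluate_robp(layers, bits):
    """Evaluate a layered read-once branching program.

    layers[i] maps each state in layer i to a pair
    (next_state_on_0, next_state_on_1); one input bit is read per layer,
    so the program's length equals len(layers).
    """
    state = 0  # start state in layer 0
    for layer, bit in zip(layers, bits):
        state = layer[state][bit]
    return state

# Toy width-2, length-3 program computing the parity of its input.
# Each state in every non-initial layer has in-degree exactly 2,
# so the program is regular in the sense of the abstract.
parity = [{0: (0, 1), 1: (1, 0)} for _ in range(3)]

assert evaluate_robp(parity, (1, 0, 1)) == 0  # 1+0+1 is even
assert evaluate_robp(parity, (1, 1, 1)) == 1  # 1+1+1 is odd
```

A pseudorandom generator fools such a program if the program's acceptance probability on the generator's output differs from its acceptance probability on truly uniform bits by at most p.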
In this survey I will present several extremal problems, and some solutions, concerning convex lattice polytopes.
A typical example is to determine the smallest area that a convex lattice polygon can have if it has exactly n vertices.
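As a quick illustration (my own example, not part of the survey), the area of a lattice polygon can be computed exactly with the shoelace formula; by Pick's theorem it is always a multiple of 1/2:

```python
from fractions import Fraction

def polygon_area(vertices):
    """Exact area of a simple polygon via the shoelace formula."""
    n = len(vertices)
    twice = sum(vertices[i][0] * vertices[(i + 1) % n][1]
                - vertices[(i + 1) % n][0] * vertices[i][1]
                for i in range(n))
    return Fraction(abs(twice), 2)

# Smallest convex lattice polygon with 3 vertices: a unit triangle.
print(polygon_area([(0, 0), (1, 0), (0, 1)]))  # 1/2

# A convex lattice pentagon (n = 5); the extremal question asks how
# small the area can be kept as the number of vertices n grows.
print(polygon_area([(0, 0), (2, 0), (3, 1), (2, 3), (0, 2)]))  # 13/2
```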
The mathematical problems arising from modern celestial mechanics, which originated with Isaac Newton’s Principia in 1687, have led to many mathematical theories. Poincaré (1854–1912) discovered that a system of several celestial bodies moving under Newton’s gravitational law can exhibit chaotic dynamics. Earlier, Euler (1707–83) and Lagrange (1736–1813) had found instances of stable motion; a spacecraft in the gravitational fields of the sun, the earth, and the moon provides an interesting system of this kind. Helmut Hofer, Professor in the School of Mathematics, explains how these observations have led to the development of a geometry based on area rather than distance.
For four years now, we have been conducting "medium-scale" experiments in how human subjects behave in strategic and economic settings mediated by an underlying network structure. We have explored a wide range of networks inspired by generative models from the literature, and a diverse set of collective strategic problems, including biased voting, graph coloring, consensus, and networked trading. These experiments have yielded a wealth of both specific findings and emerging general themes about how populations of human subjects interact in strategic networks.
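A minimal sketch of one such task, assuming a simplified myopic update rule (the actual experimental platform, networks, and payoff structure differ): simulated players in a graph-coloring game each see only their neighbors' colors and switch to the least-conflicting color:

```python
import random

def coloring_game(adj, colors, rounds=2000, seed=0):
    """Asynchronous myopic dynamics for networked graph coloring.

    Each player sees only its neighbors' colors and switches to a color
    held by the fewest neighbors; in the human-subject experiments the
    incentive was global (payment on reaching a proper coloring).
    """
    rng = random.Random(seed)
    color = {v: rng.randrange(colors) for v in adj}
    for _ in range(rounds):
        v = rng.choice(list(adj))
        neighbor_colors = [color[u] for u in adj[v]]
        # pick a color held by the fewest neighbors, breaking ties randomly
        color[v] = min(range(colors),
                       key=lambda c: (neighbor_colors.count(c), rng.random()))
        if all(color[u] != color[w] for u in adj for w in adj[u]):
            return color  # proper coloring reached
    return None

# A 6-cycle is 2-colorable; the dynamics typically find a proper coloring.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(coloring_game(cycle, colors=2))
```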
The basic problem of all pictures is grounded in their bipolar existence. They are created objects, but nonetheless present themselves as living beings. This paradoxical double structure is exemplified in the “ME FECIT” of countless inscriptions. With its “EGO,” the pictorial work declares that it does not consist of artificially shaped dead material but is a living form. Dramatizing this problem, Leonardo da Vinci created the formula that pictures “imprison” the beholder.
In this lecture, Horst Bredekamp follows a chain of examples from antiquity, the Middle Ages, early modernity, and the twentieth century in order to question the traditional concept of the relationship between the work of art and the beholder, conceptualizing a theory of the picture-act that seeks to develop alternatives to traditional concepts of representation, illustration, and mimesis.
Collateralized Debt Obligations (CDOs) and related financial derivatives were at the center of the last financial crisis and are the subject of an ongoing regulatory overhaul.
In this talk, I shall describe an ongoing project to develop a complexity theory for cryptographic (multi-party) computations. Different kinds of cryptographic computations involve different constraints on how information is accessed. Our goal is to qualitatively -- and, if possible, quantitatively -- characterize the "cryptographic complexity" (defined using appropriate notions of reductions) of these different modes of accessing information. We also explore the relationship between such cryptographic complexity and computational intractability.
We prove completeness results for a certain class of functions, with implications for the automatic proving of universally composable security theorems for ideal and real functionalities composed of if-then-else programs with (uniform) random number generation and data objects from the additive group of GF(2^m). The theorems imply that, within this language framework, there is a decision procedure to determine whether a real functionality realizes an ideal functionality, and this procedure runs in time independent of m (which is essentially the security parameter).
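To illustrate what "realizes" means here, a small Python sketch (my own toy example, not the paper's formalism): a real functionality that adds a fresh uniform key to its input, where addition in the additive group of GF(2^m) is bitwise XOR, induces exactly the output distribution of the ideal functionality that outputs a uniform group element:

```python
import random
from collections import Counter

def ideal(m, rng):
    """Ideal functionality: a uniformly random element of GF(2^m)."""
    return rng.randrange(2 ** m)

def real(x, m, rng):
    """Real functionality: add a fresh uniform key to the input x;
    addition in the additive group of GF(2^m) is bitwise XOR."""
    key = rng.randrange(2 ** m)
    return x ^ key

# For every fixed input x the two output distributions coincide, whatever
# m is; the point of the decision procedure is to certify such facts in
# time independent of m, whereas this empirical check is per-m.
m, x, rng = 4, 0b1010, random.Random(0)
counts = Counter(real(x, m, rng) for _ in range(16000))
print(len(counts), min(counts.values()))  # all 16 outputs, roughly 1000 each
```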
Uniformity norms are defined in various contexts in order to distinguish ``typical'' random functions from functions that contain certain structures. A typical random function has small uniformity norms, while a function with a non-negligible uniformity norm correlates with a very structured function, e.g., a rank-one function, a low-degree polynomial, or a nil-sequence. Such facts are useful for proving the existence of sub-structures (e.g., arithmetic progressions) inside larger structures. They are also interesting from the perspective of computer science.
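As a concrete instance (the standard degree-2 definition, not specific to this talk), the uniformity norm of a function \(f : G \to \mathbb{C}\) on a finite abelian group \(G\) is given by

\[
  \|f\|_{U^2}^{4}
  \;=\;
  \mathbb{E}_{x,\,h_1,\,h_2 \in G}\;
  f(x)\,\overline{f(x+h_1)}\,\overline{f(x+h_2)}\,f(x+h_1+h_2),
\]

and it satisfies \(\|f\|_{U^2}^{4} = \sum_{\chi} |\hat{f}(\chi)|^{4}\), so a non-negligible \(U^2\) norm forces a large Fourier coefficient, i.e., correlation with some character \(\chi\): the simplest instance of the structured functions mentioned above.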