Past CNS Talks
Network Workbench Workshop
Weixia (Bonnie) Huang and the NWB Team
Abstract: This two-hour workshop will present and demonstrate the Network Workbench (NWB) Tool, the Community Wiki, and the Cyberinfrastructure Shell developed in the NSF-funded Network Workbench project.
—The NWB Tool is a network analysis, modeling, and visualization toolkit for physics, biomedical, and social science research. It is a standalone desktop application that can be installed and run on Windows, Linux x86, and Mac OS X. The tool currently provides easy access to about 40 algorithms and several sample datasets for the study of networks. The loading, processing, and saving of four basic file formats (GraphML, Pajek .net, XGMML, and NWB) are supported, along with an automatic conversion service among those formats. Additional algorithms and data formats can be integrated into the NWB Tool using wizard-driven templates, thanks to the Cyberinfrastructure Shell (CIShell).
—CIShell is an open-source software framework for the integration and utilization of datasets, algorithms, tools, and computing resources. Although CIShell and the NWB Tool are developed in Java, algorithms developed in other programming languages such as Fortran, C, and C++ can be easily integrated.
—The Network Workbench Community Wiki is a place for users of the NWB Tool, CIShell, and other CIShell-based programs to request, obtain, contribute, and share algorithms and datasets. The developer/user community can work together to create additional tools and services that meet both their own needs and those of their scientific communities at large. All algorithms and datasets available via the NWB Tool are documented in the NWB Community Wiki.
The workshop will present the overall structure and implementation, along with a demo for potential developers and users. We would like to acknowledge the NWB team members who made major contributions to the NWB Tool and/or Community Wiki: Santo Fortunato, Katy Börner, Alex Vespignani, Soma Sanyal, Ramya Sabbineni, Vivek S. Thakre, Elisha Hardy, and Shashikant Penumarthy.
| 6:00 PM | Wells Library 001
Automated Customer Tracking and Behavior Recognition
Ray Burke and Alex Leykin
Abstract: The retail context has an impact on consumer behavior that goes beyond product assortment, pricing, and promotion issues. It affects the time consumers spend in the store, how they navigate through the aisles, and how they allocate their attention and money across departments and categories. Unfortunately, conventional research techniques provide limited insight into the dynamics of shopper behavior.
The presentation will discuss new computational approaches for determining the location, path, and behavior of customers in retail stores using video images collected from ceiling-mounted surveillance cameras. The tracking process involves several stages of analysis: (1) segmenting the moving foreground regions from the relatively static background image using a statistical model based on the codebook approach, (2) estimating the positions of shoppers in the camera view by using a vertical projection histogram, (3) converting these camera coordinates into the x/y locations of shoppers on a store floor plan using a model of the camera's viewpoint, (4) identifying and tracking individual shoppers across frames using a Bayesian particle-filter model, and (5) identifying groups of shoppers by clustering motion trajectories. The authors will discuss data and measurement issues associated with collecting accurate tracking information, classifying customers into shopper groups, analyzing patterns of shopper behavior, and differentiating between sales associates and consumers. Validation results and example applications will be presented.
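Stage (3) above, converting camera coordinates to floor-plan coordinates, is commonly implemented as a planar homography. The abstract does not specify the camera model, so the sketch below assumes a row-major 3x3 homography matrix and hypothetical coordinates; it is an illustration of the coordinate conversion step, not the authors' implementation:

```python
def to_floor(H, u, v):
    """Map a camera pixel (u, v) to floor-plan coordinates (x, y) using a
    3x3 homography H: perspective division of H applied to [u, v, 1]."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    if w == 0:
        raise ValueError("point maps to infinity")
    return x / w, y / w

# With the identity homography the mapping is a no-op; a calibrated matrix
# would be estimated from known floor landmarks visible in the camera view.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

In practice the matrix is fit once per ceiling camera from a handful of landmark correspondences and then applied to every tracked head position.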
Grid and Network Services for Storing, Annotating, and Searching Streaming Data
Abstract: The Social Informatics Data Grid is a new infrastructure designed to transform how social and behavioral scientists collect and annotate data, collaborate and share data, and analyze and mine large data repositories. An important goal of the project is to be compatible with existing databases and tools that support the sharing, storage, and retrieval of archival data sets. It is built on web and grid services to enable transparent access to data and analysis resources from anywhere and to leverage new and emerging web-based technologies created by a large and growing community of developers around the world. At the heart of the SIDGrid design is a rich data model that captures notions of time, data streams, and semi-structured data attached to these streams to enable powerful manipulations of multimodal data spread across data resources. Through query and analysis services deployed against the data warehoused in the SIDGrid, users can perform new classes of experiments. Shared data resources available from anywhere over the Web introduce new capabilities into the collection and analysis of data, collaborative annotation among them, without relinquishing control over sensitive data, thanks to an embedded security model. This project is still in the development phase, and feedback from user communities is essential for determining which functions are most important and should be developed next.
| 6:00 PM | Wells Library 001
Conceptual Play Spaces: Designing Games for Learning
Sasha Barab and Adam Ingram-Goble
Abstract: In this presentation, we will discuss our framework for designing play spaces that support the learning of academic content. While commercial games do not focus on academic content, it is quite possible to design one that does. Conceptual play is a state of engagement that involves (a) projection into the role of a character who, (b) engaged in a partly fantastical problem context, (c) must apply conceptual understandings to make sense of and, ultimately, transform that context. Reflecting on our four years of design experience centered on the development of conceptual play spaces, we provide designers and educators with anchor points and examples for thinking through what it would mean to design a game for supporting learning. This discussion will be situated in the context of our Quest Atlantis project.
Quest Atlantis is a standards-based online 3-D learning environment that transports students to virtual places to teach a wide variety of subjects, such as language arts, mathematics, the sciences, geography/social studies, and the arts while building digital age competencies and fostering a disposition to improve the world (see www.QuestAtlantis.org). This program was created by Dr. Sasha Barab, the Barbara B. Jacobs Chair in Education & Technology, and other collaborators from various departments at Indiana University. Quest Atlantis has been developed with substantial funding from the National Science Foundation, John D. and Catherine T. MacArthur Foundation, and National Aeronautics and Space Administration. It is the leading example of a new game-based curriculum. Our goal here is to both communicate the potential value of conceptual play spaces, and to provide an illuminative set of cases such that others might draw out lessons as they build their own.
Evolution selects for complexity, but only when complexity is of evolutionary value
Abstract: There has been a long-standing debate as to whether there exists any kind of "arrow of complexity" due to the action of natural selection. Some early scientists and philosophers reasoned that there must be, based on the paleontological record. Some object to a potential anthropocentric chauvinism in the interpretation of complexity in these records. Others, notably McShea and Gould, suggest a distinction between "driven" and "passive" selection for complexity, where the former corresponds to an active, non-random process biased towards increasing complexity, while the latter corresponds to a random process of diffusion away from a lower bound of complexity. Attempts to distinguish between driven and passive selection in the fossil record have met with mixed results, providing evidence for both conclusions. I will describe an experiment using a computer model of an evolving ecology in which agent behaviors are driven by artificial neural networks, the architecture of which is the primary subject of either natural selection or a random diffusion process. An information-theoretic measure of the complexity of the resulting neural dynamics allows an investigation of the distinction between driven and passive selection. The results of this study suggest a simple explanation for the variability in biological studies of evolutionary trajectories: under certain conditions evolution does indeed select for complexity, in a driven fashion, faster than one would see with a purely random, diffusive process. But this is only the case when additional complexity confers an evolutionary advantage on the affected agents. Under other conditions, natural selection for "good enough" solutions can reduce complexity growth relative to that observed in a passive, randomly diffusing system, even when all other contributing factors are held constant.
Thus evolutionary complexity is neither entirely driven nor entirely passive, in McShea's sense, but is unavoidably a blend of the two forces, depending in a reasonably intuitive fashion on the evolutionary value of incremental gains in complexity.
Land Use Decision-Making and Landscape Outcomes
Abstract: Historical trajectories of land cover change in developed countries have provided the basis for a theory of forest transition. To briefly summarize, Forest Transition Theory (FTT) suggests that nations experience dramatic deforestation during a frontier period of heavy resource use, and that this deforestation phase is eventually followed by a period of reforestation after some period of economic development. A considerable amount of research has focused on the drivers of deforestation, but we have a less complete understanding of the diverse factors contributing to reforestation and the prospects for a transition from deforestation to reforestation in different economies. These forest cover trajectories are the result of interactions between social and ecological processes operating at multiple spatial and temporal scales, and numerous methodological approaches have been used to examine the complexity in these coupled social-ecological systems. This presentation summarizes findings to date from research examining the role of land-use decision-making in land cover change in the Midwestern United States, Brazil, and Laos. Results are presented from the integration of agent-based models of land cover change and empirical data drawn from social surveys and remotely sensed data (aerial photography and satellite imagery). Findings from spatially explicit experimental work are also discussed that address the role of landowner heterogeneity and how management activities by diverse local-level actors result in complex macro-scale outcomes.
The Way Things Go: Provenance, Semantic Networks, and Systems-Scale Science
Abstract: Like many other complex human endeavors, scientific work is a decentralized, heterogeneous activity spanning organizational, disciplinary, and technical boundaries. As science begins to address large-scale systems, the growing complexity of scientific work processes requires new infrastructure for understanding and managing the production of knowledge from distributed observation, simulation, analysis, and discourse activities. Cyberenvironments extend existing science application capabilities to include the ability to record, analyze, and interpret provenance documentation describing the causal relationships between processes and artifacts in scientific work. Using provenance-enabled collaboration and analysis tools, scientists can efficiently assess, validate, reproduce, and refine experiments and results. Provenance documentation enriches the scientific research record, enabling significant results to be preserved along with much of the associated information necessary to correctly interpret them. NCSA's suite of prototype Cyberenvironment tools is based around the idea of semantic content networks and built around the World Wide Web Consortium's Resource Description Framework (RDF). RDF provides an application- and domain-neutral way to represent metadata, and can thus be used to link domain-specific information with generic vocabularies for describing artifacts and work processes. NCSA's work in the Grid Provenance Challenge, for instance, has demonstrated the applicability of RDF to representing scientific workflow executions, enabling data products to be linked via RDF to descriptions of the complex processes that produced them. The emerging Open Provenance Model attempts to further abstract the notion of causal relationships in scientific and other work processes, allowing provenance-enabled tools to link independently observed processes together to form descriptions of larger-scale processes.
The scientific research record can then be understood as a semantic network of causality and thus be linked with other relevant networks, such as social networks, to provide a comprehensive model of scientific work that can be applied to new communities to build powerful science Cyberenvironments that maximize the impact of collaborative, systems-scale scientific work.
Changing the rules of the game: experiments with humans and virtual agents
Abstract: Many resource problems can be classified as commons dilemmas: a dilemma between the interest of the individual and the interest of the group as a whole. During the last decades substantial progress has been made in understanding how people can avoid the tragedy of the commons. However, we lack a good understanding of how people change institutional arrangements over time in an effective way in an environment with dynamic resources. I will discuss the initial results of a project where we look at the innovation of institutional arrangements in common pool resource management, combining laboratory and field experiments with agent-based modeling. In laboratory experiments, groups share resources in a dynamic, spatially explicit virtual environment, while the pencil-and-paper field experiments in Colombia and Thailand include various types of resources (fishery, forestry, and irrigation). Using the individual-level data derived from the experiments, we develop and test agent-based models to derive a better understanding of the experimental data. We also use the agent-based models to explore the evolution of institutional rules in various contexts that we could not (yet) experiment with. Going back and forth between experiments with humans and virtual agents is a fruitful way to develop empirically based agent-based models. I will discuss methodological challenges experienced in this project as well as initial results of the various models.
| 4:00 PM | Wells Library 001
Annual Open House
Faculty and Students
Abstract: Open your laptops and demo your software. Bring posters to introduce your research questions and results. So far, the following posters and demos are planned:
—PathView InfoVis Tool developed by Scott Long, Sociology & Mike Boyles and Pooja, Advanced Visualization Lab
—Structural Analysis of Computer Vocabulary: How do Users Conceptualize the Computer? by Adity Mutsuddi Upoma, CS
—An Emergent Mosaic of Wikipedian Activity by Bruce W. Herr II, Todd M. Holloway, Katy Börner
—Flows of Information and Influence in Social Networks by Eliot R. Smith
—ThisStar: Declarative Visualization by Joseph Cottam and Andrew Lumsdaine
—Reading the Envelope -- Understanding Visual Similarity Matrices by Ben Martin, Joseph Cottam, Chris Mueller and Andrew Lumsdaine
—The Association of American Law Schools (AALS) Dataset: Visualizations, Informetrics and the History of a Discipline by Peter Hook
—Towards a Preservable Object: A New Model for Digital Preservation for Cyberinfrastructure by Stacy Kowalczyk
—113 Years of Physical Review by Bruce W. Herr II, Russell Duhon, Elisha F. Hardy, Katy Börner
—Network Workbench Tool by Bonnie Huang & NWB Team
—InfoVis Lab Management System by Kenneth Lee & Russell Duhon et al.
—Scholarly Database by Gavin LaRowe et al.
—Artist Net Visualization by Justin Donaldson.
—See InfoVis Lab and CI for NetSci Center Open House web page for many more demos.
Visual Similarity Matrices
Benjamin Martin, Joseph Cottam, and Chris Mueller
Abstract: Matrix representations of graphs provide a useful alternative and supplement to other graph drawing methods. In matrix representations, matrix orderings take the place of graph layouts and have a critical impact on the usefulness of the resulting matrix. We consider some factors that may make one ordering algorithm better than another, and examine some ordering algorithms in light of these factors. We also consider the problem of interpreting, from a qualitative perspective, matrix-based representations produced by some of these algorithms. Finally, we present some ongoing research regarding breadth-first search ordered matrices of small-world graphs, based on quantitative analysis of the resulting ordered matrices.
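The breadth-first search ordering mentioned above can be sketched in a few lines: compute a BFS visit order from some start vertex, then permute the rows and columns of the adjacency matrix by that order. The graph and its adjacency-list encoding below are hypothetical, and this is a minimal illustration rather than the authors' implementation:

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first vertex ordering, used to permute an adjacency matrix."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in sorted(adj[v]):       # deterministic tie-breaking
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def permute(matrix, order):
    """Reorder both rows and columns of a dense matrix by the ordering."""
    return [[matrix[i][j] for j in order] for i in order]
```

For small-world graphs, such an ordering tends to pull edges toward the diagonal, which is what makes the resulting matrix picture interpretable.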
A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro
John M. Beggs
Abstract: Multi-neuron firing patterns are often observed, yet are predicted to be rare by models that assume independent firing. To explain these correlated network states, two groups recently applied a second-order maximum entropy model that used only observed firing rates and pairwise interactions as parameters (Schneidman et al., 2006; Shlens et al., 2006). Interestingly, with these minimal assumptions they predicted 90-99% of network correlations. If generally applicable, this approach could vastly simplify analyses of complex networks. However, this initial work was done largely on retinal tissue, and its applicability to cortical circuits is unknown. This work also did not address the temporal evolution of correlated states. To investigate these issues, we applied the model to multielectrode data containing spontaneous spikes or local field potentials from cortical slices and cultures. The model worked slightly less well in cortex than in retina, accounting for 88% ± 7% (mean ± s.d.) of network correlations. In addition, in 8/13 preparations the observed sequences of correlated states were significantly longer than predicted by concatenating states from the model. This suggested that temporal dependencies are a common feature of cortical network activity, and should be considered in future models. We found a significant relationship between strong pairwise temporal correlations and observed sequence length, suggesting that pairwise temporal correlations may allow the model to be extended into the temporal domain. We conclude that while a second-order maximum entropy model successfully predicts correlated states in cortical networks, it should be extended to account for temporal correlations observed between states.
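For context, the first-order (independent-firing) baseline that the second-order model improves upon can be written down directly from the firing rates; fitting the pairwise interaction terms themselves requires iterative optimization and is not shown here. The firing rates below are hypothetical:

```python
from itertools import product

def independent_model(rates):
    """First-order maximum entropy model: binary pattern probabilities from
    firing rates alone, P(x) = prod_i p_i^x_i * (1 - p_i)^(1 - x_i)."""
    dist = {}
    for pattern in product([0, 1], repeat=len(rates)):
        p = 1.0
        for x, r in zip(pattern, rates):
            p *= r if x else (1.0 - r)
        dist[pattern] = p
    return dist

# Two neurons each firing in 10% of time bins: the independent model
# predicts joint firing in only 1% of bins; correlated circuits exceed this,
# which is exactly the gap the pairwise (second-order) model closes.
dist = independent_model([0.1, 0.1])
```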
Inventing the Future
Daniel A. Reed
Abstract: Ten years — a geological epoch on the computing time scale. Looking back, a decade brought the web and consumer email, digital cameras and music, broadband networking, multifunction cell phones, WiFi, HDTV, telematics, multiplayer games, electronic commerce and computational science. It also brought spam, phishing, identity theft, software insecurity, outsourcing and globalization, information warfare and blurred work-life boundaries. What will a decade of technology advances bring in communications and collaboration, sensors and knowledge management, modeling and discovery, electronic commerce and digital entertainment, critical infrastructure management and security?
Prognostication is always fraught with challenges, especially when predicting the effects of exponential change. Aggressively inventing the future, based on perceived needs and opportunities, is far more valuable. As Daniel Burnham famously remarked, "Make no little plans, they have no power to fire men's spirits." In this presentation, we present some visions of a technology-enriched future, driven by emerging technologies and by national and international policies and competitive strategies. We also discuss their implications for university futures, in a rapidly changing world.
Metadata, Provenance, and Search in e-Science
Abstract: Computational science investigations carried out through cyberinfrastructure frameworks are capable of generating quantities of data far larger and more tightly related than previous hand-driven techniques. For this data to be useful in other applications within the domain science or across multiple science domains, or simply to remain useful and accessible through time, it must be described by metadata. Both syntactic (lower-level) and semantic (higher-level) metadata are important for reconstruction. A data product's provenance, or derivation history, is key to ascertaining attributes such as its quality. We argue that the best time and place to gather metadata and provenance is closest to the source of a dataset's generation, because that is where the most knowledge resides. In this talk we discuss metadata, provenance, and search in cyberinfrastructure-driven computational science. In our experience, most communication about data products in computational science uses XML. We discuss a solution to metadata storage in which a metadata catalog, standing separate from the storage system on which the products reside, provides rich domain-friendly communication with other components of the cyberinfrastructure. We examine provenance collection for workflow systems and data streaming, tying both to missing data in data streams through Kalman filters and to data quality through a data quality model. Finally, we discuss current efforts to integrate cyberinfrastructure-driven computational science and digital repositories through provenance.
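A toy illustration of XML-borne metadata with an embedded provenance link follows; the element names and file identifiers are hypothetical, not a standard schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical metadata record for one derived data product.
record = ET.Element("dataProduct", id="run42/output.nc")
ET.SubElement(record, "creator").text = "forecast-workflow"
prov = ET.SubElement(record, "provenance")
ET.SubElement(prov, "derivedFrom").text = "run42/input.nc"

# Serialize for storage in a metadata catalog separate from the data itself.
xml_text = ET.tostring(record, encoding="unicode")
```

A catalog holding such records can answer domain-friendly queries (by creator, by derivation chain) without touching the storage system where the products reside.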
| 6:00 PM | Wells Library 001
Wireless Router Insecurity: The Next Crimeware Epidemic
Abstract: The widespread adoption of home routers by the general public has added a new target for malware and crimeware authors. A router's ability to manipulate essentially all network traffic coming into and out of a home means that malware installed on these devices can launch powerful Man-In-The-Middle (MITM) attacks, a form of attack that has previously been largely ignored. Making matters worse, many homes have deployed wireless routers that are insecure if the attacker has geographic proximity to the router and can connect to it over its wireless channel. However, some have downplayed this risk by suggesting that attackers will be unwilling to spend the time and resources necessary, or to risk exposure, to attack a large number of routers in this fashion. In this talk, we will consider the ability of malware to propagate from wireless router to wireless router over the wireless channel, infecting large urban areas where such routers are deployed relatively densely. We develop an SIR epidemiological model and use it to simulate the spread of malware over major metropolitan centers in the US. Using hobbyist-collected wardriving data from Wigle.net and our model, we show that the infection of tens of thousands of routers in short periods of time is quite feasible. We consider simple prescriptive suggestions to minimize the likelihood that such attacks are ever performed. Next, we show a simple yet worrisome attack that can easily and silently be performed from infected routers. We call this attack 'Trawler Phishing'. The attack generalizes a well-understood failure of many websites to properly implement SSL, and allows attackers to harvest credentials from victims over a period of time, without the need to use spamming techniques or mimicked but illegitimate websites as in traditional phishing attacks, bypassing the most effective phishing prevention technologies.
Further, it allows attackers to easily form data portfolios on many victims, making the collected data substantially more valuable. We consider prescriptive suggestions and countermeasures for this attack.
The work on epidemiological modeling is joint work with Hao Hu, Vittoria Colizza, and Alex Vespignani. The work on trawler phishing is joint work with Sid Stamm.
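A discrete-time SIR iteration of the general kind used in the talk can be sketched as below. The infection and patching rates here are hypothetical, and the real model additionally accounts for router geography from the wardriving data:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete SIR step: susceptible routers are infected at rate
    beta*s*i; infected routers are cleaned or patched at rate gamma."""
    new_infections = beta * s * i
    recoveries = gamma * i
    return s - new_infections, i + new_infections - recoveries, r + recoveries

def simulate(s, i, r, beta, gamma, steps):
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return s, i, r

# Fractions of a router population, starting from one infection per 10,000.
s, i, r = simulate(0.9999, 0.0001, 0.0, beta=0.5, gamma=0.1, steps=200)
```

With these illustrative rates the basic reproduction number beta/gamma is well above one, so the infection burns through most of the susceptible population before dying out, mirroring the tens-of-thousands-of-routers scenario described above.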
| 4:00 PM | Swain Hall West 238
Biocomplexity talk on "A Phase Transition in Cortical Slice Networks"
John M. Beggs
| 6:00 PM | Wells Library 001
Search in Mental Landscapes: Characterizing Trajectory in Abstract Human Problem-Solving
Abstract: Search processes characterize a great variety of human and animal behavior. Adaptive search behavior requires an appropriate modulation between exploration and exploitation. The evolution of adaptive search provides unique and interesting insights into human behavior and cognition. We are currently studying search processes in abstract working memory tasks, where individuals are asked to seek answers to problems in spaces that can be characterized as a network of interconnected solutions. Our tasks include anagram and mathematics tasks that ask individuals to find multiple solutions for a given problem. For example, find four-letter words in the letter set BLNTAO, or find equations that satisfy "= 8" using the numbers 1, 2, 3, 4, 5. Solutions provided by individuals are then characterized as search on a network where edges represent transitions between solutions. We then test the observed networks against putative cognitive hypotheses, defining hypothetical networks based on different estimates of similarity between solutions (e.g., Levenshtein distance and bigram frequencies). In this talk, I will discuss the biological basis for search in human cognition and the progress of our recent research as it investigates trajectories of search in abstract cognitive spaces.
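Levenshtein distance, one of the similarity estimates mentioned above, is computable with the standard dynamic program; a minimal sketch (the example words are hypothetical, not from the study):

```python
def levenshtein(a, b):
    """Edit distance: minimum number of insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Two anagram-task solutions one substitution apart are close in the
# hypothesized solution network.
d = levenshtein("bolt", "boat")
```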
| 6:00 PM | Wells Library 001
Emergent properties of flock activity in brown-headed cowbirds (Molothrus ater) using social networks
Abstract: The purpose of the study was to investigate the vocal and social behavioral measures that contribute to a cowbird's reproductive success. Brown-headed cowbirds are an intriguing species because of their parasitic nature: females lay eggs in the nests of other species, which raise the young cowbirds to independence. Evolutionists have considered cowbirds a model species with a "genetic safety net" that would require no learning for reproduction. Previous research by our lab, however, has demonstrated that cowbirds need to learn both social and vocal behavior from older conspecifics in order to be reproductively successful. The present study uses social network techniques to understand individual reproductive success. We studied three flocks composed of juvenile females, adult females, and adult males in three large semi-naturalistic aviaries. From March to May, we documented social and vocal behavior in each flock. In early May, the three aviaries were opened to form one large aviary, allowing birds the opportunity to interact and mate with individuals from all three flocks. Reproductive measures were collected during both time periods, and microsatellite genotyping was conducted to analyze the parentage of fertile eggs. Social network analyses were used to link reproductive success with social measures.
The power of a good idea: Predictive population dynamics of scientific discovery and information diffusion
Luis M. A. Bettencourt
Abstract: We study quantitatively several examples of scientific discovery by tracking the temporal growth in numbers of publications and authors in specific scientific fields in physics, biology, medicine, and materials science in the aftermath of the publication of breakthrough concepts. We show that in every case the evolution of scientific literatures is well described by population models adapted from biology, but with key differences that reflect specific aspects of the social dynamics of scientific interaction. We construct associated measures of scientific productivity by relating changes in the number of active authors to output in terms of publications. These methods give an integrated, concise, and predictive description of epidemics of scientific knowledge, characterizing them in terms of quantities analogous to those of contagious processes in biology and of scaling laws as fields grow. We also show how these quantitative dynamics provide the means to design optimal funding intervention policies that maximize scientific output as fields unfold.
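The simplest member of the population-model family invoked above is logistic growth, where a field recruits authors in proportion both to its current size and to the remaining pool of potential adopters. The parameters below are hypothetical, chosen only to illustrate the characteristic S-shaped trajectory:

```python
def logistic_series(n0, growth_rate, capacity, steps):
    """Discrete logistic growth: each year the author population grows in
    proportion to itself and to the remaining 'room' in the field."""
    n, series = n0, [n0]
    for _ in range(steps):
        n = n + growth_rate * n * (1.0 - n / capacity)
        series.append(n)
    return series

# A hypothetical field: 10 founding authors, a pool of ~5000 potential
# adopters, and 40% per-year recruitment while the field is small.
series = logistic_series(n0=10, growth_rate=0.4, capacity=5000, steps=40)
```

Fitting such curves to publication and author counts is what lets growth rates and saturation sizes be compared across fields, and deviations from the fit point to the field-specific social dynamics the abstract mentions.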
| 6:00 PM | Wells Library 001
Dynamics of tremor networks in Parkinson's disease
Abstract: Tremor (an involuntary, rhythmic oscillatory movement of one functional body region) is one of the cardinal motor symptoms of Parkinson's disease and is believed to be generated in the basal ganglia - thalamocortical circuits, which are involved in the control of motor programs. The study of the tremor-related activity in different parts of motor control networks (basal ganglia and muscles) in Parkinsonian patients reveals a complex spatiotemporal pattern of synchronous oscillatory activity. The study of the short-interval phase synchronization in these networks indicates that the synchrony is highly intermittent in time and follows certain patterns of spatial organization. These observations are used to develop hypotheses on the functional structure of the basal ganglia motor control networks.
| 6:00 PM | Wells Library 001
Photons and visual information processing: Signal, noise and optimality in blowfly motion perception
Rob de Ruyter van Steveninck
Abstract: Visual information processing begins in the retina, where light is converted into electrical signals. Those signals are used by the brain to extract features useful in guiding action. An example of this is the estimation of visual motion, which is very important in animal navigation. A fundamental constraint in this process comes from the physics of light. Light is absorbed in packets, called photons, which arrive at random in time. The visual input signal therefore contains an irreducible noise component, which will affect any computations performed by the brain.
In our group we are interested in how the statistics of visual signal and noise affect the computation of motion. I will present two approaches to this problem:
- Concurrent sampling of natural visual signals and motion, which allows us to derive the computational form of the optimal motion estimator.
- Recording from blowfly motion sensitive neurons. This tells us how a relatively small, but visually sophisticated, animal is affected by visual signal statistics.
The first approach leads to the somewhat surprising result that, in order to be optimal, the estimation of velocity must be biased at low contrasts. The neural recordings show that the fly exhibits a very similar bias, suggesting that its brain implements a form of optimal processing. Some of the more general implications of this result will be discussed.
Analysing research fields within Physics using network science
Abstract: The Physics and Astronomy Classification Scheme (PACS) was introduced by the American Institute of Physics (AIP) in 1975 to identify fields and sub-fields of physics. Each document published by the AIP has one or more of these PACS numbers. Lately, other databases and online websites have been using this classification scheme to group articles and authors into different sub-fields of physics and assigning these numbers to articles published in journals other than the AIP journals. Since an article is assigned more than one PACS number, we analyse the co-occurrence of PACS numbers over a period of 20 years, from 1985 to 2005. The network of PACS co-occurrences is an extremely dense network which exhibits small-world properties. It consists of a single giant component, with PACS numbers in the general category exhibiting the highest betweenness centrality. We also use various clustering techniques to study the clusters of PACS numbers for each year. The clusters formed strongly overlap with each other, and we use the CFinder software to identify the overlapping clusters. Though the major communities remain the same, we are able to identify sub-communities within them which change over time. We also uncover unexpected connections between very different communities.
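Building the co-occurrence network from per-article PACS assignments amounts to counting code pairs; a minimal sketch with hypothetical articles (real PACS codes, but an invented corpus):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(articles):
    """Count how often each pair of PACS codes is assigned to the same
    article; pairs are stored in sorted order so (a, b) == (b, a)."""
    edges = Counter()
    for codes in articles:
        for a, b in combinations(sorted(set(codes)), 2):
            edges[(a, b)] += 1
    return edges

# Each inner list is the set of PACS codes assigned to one article.
edges = cooccurrence([
    ["89.75.Hc", "05.10.-a"],
    ["89.75.Hc", "05.10.-a", "87.23.Ge"],
])
```

The resulting weighted edge list is the input to the betweenness-centrality and overlapping-clustering analyses described above.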
Network Structure of Protein Folding Pathways
Abstract: Packing problems, atomic clusters, polymers, and the ultimate building blocks of life, proteins, all live in high-dimensional conformation spaces littered with forbidden regions induced by self-avoidance. The classical approach to protein folding inspired by statistical mechanics avoids this high dimensional structure of the conformation space by using effective coordinates. Here we introduce a network approach to capture the statistical properties of the structure of conformation spaces, and reveal the correlations induced in the energy landscape by the self-avoidance property of a polypeptide chain. We show that the folding pathways along energy gradients organize themselves into scale free networks, thus explaining previous observations made via Molecular Dynamics (MD) simulations. We also show that these energy landscape correlations are essential for recovering the observed connectivity exponent, which belongs to a different universality class than that of random energy models. We further corroborate our results by MD simulations on a 20-monomer AK peptide.
Eliot R. Smith
Abstract: Social psychologists have studied the psychological processes involved in persuasion, conformity, and other ways people influence each other, but have rarely modeled the ways influence processes play out when multiple sources and multiple targets of influence interact over time. At the same time, workers in other fields ranging from sociology and marketing to cognitive science and physics have recognized the importance of social influence and have developed models of influence flow in groups and populations. This talk reviews models of social influence drawn from a number of fields, categorizing them using four conceptual dimensions: (a) assumed patterns of network connectivity among people, (b) assumptions of discrete behaviors versus continuous attitudes, (c) whether nonsocial as well as social influences are assumed, and (d) whether social influence is assumed to always produce assimilation, or whether contrast (moving away from the source of influence) is also possible. This set of four dimensions delineates the universe of possible models of social influence. The detailed, micro-level understanding of influence processes derived from focused laboratory studies should be contextualized in ways that recognize how multidirectional, dynamic influences are situated in people's social networks and relationships.
6:00 PM | Wells Library 001
Social Networks of Characters in Dreams
Abstract: Social interactions are more frequent in dreams than in waking life. This suggests that dream reports might be a useful source of information about social networks; then again, given the bizarre events that sometimes occur in dreams, dream reports may contain nothing systematic. For three individuals, dream reports were coded for the characters occurring in each dream. An affiliation network was constructed for each dreamer by considering each character as a vertex and joining two characters with an edge if they were present in a dream together. Two of the resulting social networks have the small-world properties of short average path lengths and high clustering (i.e., transitivity). The network for one dreamer is different, with lower clustering than the others. The number of characters present in at least one dream with a certain character is called the degree of the character. The distribution of degrees in the dream social networks follows Zipf's Law, as often occurs in waking social networks. During dreaming, there is little input from the senses, so it is proposed that properties of a dream social network must arise from corresponding properties of the representation of people in the dreamer's memory. However, a dream social network is not a carbon copy of the dreamer's waking social network; for example, dream networks include celebrities. Dream reports are a source of extra information about an individual's social network, systematic but somehow transformed to reflect what the dreamer is concerned about.
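The affiliation-network construction and the degree measure defined in the abstract can be sketched directly; the character names and dreams below are invented for illustration.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical coded dream reports: each inner list holds the
# characters appearing together in one dream.
dreams = [
    ["mother", "brother"],
    ["mother", "coworker", "celebrity"],
    ["brother", "coworker"],
]

# Affiliation network: join two characters with an edge if they
# were present in a dream together.
neighbors = defaultdict(set)
for chars in dreams:
    for a, b in combinations(set(chars), 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

# Degree of a character = number of distinct characters present in
# at least one dream with that character.
degree = {c: len(n) for c, n in neighbors.items()}
print(degree["mother"])  # 3: brother, coworker, celebrity
```

Ranking the `degree` values from largest to smallest is what the abstract compares against Zipf's Law.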
6:00 PM | Wells Library 001
Solving proteomics using global non-linear modeling
Abstract: Imagine if, on moving into a new house, you were given a box full of electrical components rather than the shiny appliances you were hoping for. You hope to find some assembly instructions, but instead you have only some pictures of the final appliances. Your task is to assemble them: an almost impossible mission. For a start, you would not even know what 90% of the components do, let alone which appliance they belong to. This is where we find ourselves in biology at the moment. Amazing advances have been made in identifying the basic building blocks of life: proteins. The sequencing of the genomes of multiple species has produced an almost exhaustive catalogue of proteins. This is a great wealth of data. However, only a few of these proteins are understood, and their interactions even less so: there are no instructions in the box. Proteins form the basis of most biological structures, and of the biochemical reactions we understand as life. The key challenge in this new century is to understand how these components (the proteins) work together. We call this study proteomics.
The experimentalists react proteins together and use sophisticated techniques to measure these reactions as they progress with time. The idea is to identify which of these reactions are the ones that occur in nature. There is a vast amount of data being produced by these experiments; our task is to automate the intelligent processing of it. This problem is important and complex, and many groups are attempting solutions.
We have developed a new global non-linear modeling technique, which considers every possible combination of reactions and then removes the least likely, one by one. At the end, we hope to have found the mechanism that occurs in real life. So far, we have successfully predicted the chemical reaction steps of the sugar metabolism of a bacterium.
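The "remove the least likely, one by one" procedure is a form of greedy backward elimination. A minimal sketch, with an invented scoring function standing in for the statistical fit to experimental time-series data (the reaction labels and score are illustrative, not the group's actual method):

```python
def backward_eliminate(candidates, score, target_size):
    """Greedy backward elimination.

    score(model) -> lower is better; model is a set of reaction terms.
    Repeatedly drop the term whose removal hurts the fit least.
    """
    model = set(candidates)
    while len(model) > target_size:
        # The "least likely" term is the one whose removal yields
        # the best (lowest) score for the remaining model.
        least_likely = min(model, key=lambda t: score(model - {t}))
        model.remove(least_likely)
    return model

# Toy score: pretend the "true" mechanism is {"A->B", "B->C"};
# penalize missing true reactions heavily and spurious ones lightly.
true_terms = {"A->B", "B->C"}
def score(model):
    return 10 * len(true_terms - model) + len(model - true_terms)

selected = backward_eliminate({"A->B", "B->C", "C->A", "A->C"}, score, 2)
print(sorted(selected))  # ['A->B', 'B->C']
```

In practice the score would come from refitting the non-linear model to the measured reaction data at each step, which is where the noise-handling discussed below matters.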
Measuring biological data is not an exact science, and the complex non-linear behavior means that our selection process will always be prone to error. To improve our accuracy, we have been considering how best to handle the noisy data. We suggest alternative experiments, and explain to experimentalists where a small improvement in accuracy will yield disproportionately large benefits.
6:00 PM | Wells Library 001
Socially Guided Machine Learning
Abstract: Socially Guided Machine Learning is a research paradigm concerning computational systems that participate in social learning interactions with everyday people in human environments. This approach asks: how can systems be designed to take better advantage of learning from a human partner, and of the ways that everyday people approach the task of teaching? In doing so, it uses Machine Learning to gain a deeper understanding of human teaching and learning. In this talk I describe two novel social learning systems, on robotic and computer game platforms. These systems ask important questions about how to design learning machines, and results from these systems contribute significant insights into the ways that learners leverage social interaction.
Sophie is a virtual robot that learns from human players in a video game via interactive Reinforcement Learning. A series of experiments with this platform uncovered and explored three principles of Social Machine Learning: guidance, transparency, and asymmetry. For example, when the algorithm and interface are modified to use transparency behaviors and attention direction, there is a significant positive impact on the human partner's ability to teach the agent: a 50% decrease in actions needed to learn a task, and a 40% decrease in task failures during training. On the Leonardo social robot, I describe my work enabling Leo to participate in social learning interactions with a human partner. Examples include learning new tasks in a tutelage paradigm, learning via guided exploration, and learning object appraisals through social referencing. An experiment with human subjects shows that Leo's social mechanisms significantly reduced teaching time by aiding in error detection and correction.
6:00 PM | Wells Library 001
Social Network and Genre Emergence in Amateur Flash Multimedia
John Paolilo, Jonathan Warren and Breanne Kunz
Abstract: Research on digital media tends to characterize the emergence of new genres without reference to social networks, even though community and social interaction are invoked. In this talk, we examine Flash animations posted to Newgrounds.com, a major web portal for amateur Flash, from a social network perspective. Results indicate that participants' social network positions are strongly associated with the genres of Flash they produce. We argue from these findings that the social networks of Flash authors contribute to the establishment of genre norms, and that a social network approach can be crucial to understanding genre emergence.
Social Web Search (Part 2)
Abstract: This talk will present two research projects under way in the Network and agents Network (NaN), which study ways of leveraging online social behavior for better Web search. GiveALink.org is a social bookmarking site where users donate their personal bookmarks. A search and recommendation engine is built from a similarity network derived from the hierarchical structure of bookmarks, aggregated across users. 6S is a distributed Web search engine based on an adaptive peer network. By learning about each other, peers can route queries through the network to efficiently reach knowledgeable nodes. The resulting peer network structures itself as a small world that uncovers semantic communities and outperforms centralized search engines.
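To make the idea of a similarity network built from donated bookmarks concrete, here is a generic sketch using Jaccard similarity between the sets of users who bookmarked each URL. Note this is an illustrative stand-in, not GiveALink's actual hierarchy-based similarity measure; the users and URLs are invented.

```python
# Hypothetical donated bookmark collections: user -> set of URLs.
bookmarks = {
    "alice": {"a.org", "b.org", "c.org"},
    "bob":   {"a.org", "b.org"},
    "carol": {"c.org", "d.org"},
}

# Invert to: URL -> set of users who bookmarked it.
users_of = {}
for user, urls in bookmarks.items():
    for url in urls:
        users_of.setdefault(url, set()).add(user)

def jaccard(u, v):
    """Similarity of two URLs: overlap of the users who saved them."""
    a, b = users_of[u], users_of[v]
    return len(a & b) / len(a | b)

print(jaccard("a.org", "b.org"))  # 1.0: exactly the same users saved both
```

Thresholding such pairwise similarities yields a weighted URL-URL network over which search and recommendation can operate.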