Past CNS Talks
Weixia (Bonnie) Huang, Bruce Herr, and Ben Markines
Abstract: This talk will present and demonstrate the Network Workbench (NWB) Tool, the Community Wiki, and the Cyberinfrastructure Shell developed in the NSF-funded Network Workbench project.
—The NWB Tool is a network analysis, modeling, and visualization toolkit for physics, biomedical, and social science research. It is a standalone desktop application that installs and runs on Windows, Linux x86, and (coming soon) Mac OS X. The tool currently provides easy access to about 40 algorithms and several sample datasets for the study of networks. The loading, processing, and saving of four basic file formats (GraphML, Pajek .net, XGMML, and NWB) and an automatic conversion service among those formats are supported. Additional algorithms and data formats can be integrated into the NWB Tool using wizard-driven templates, thanks to the Cyberinfrastructure Shell (CIShell).
—CIShell is an open source software framework for the integration and utilization of datasets, algorithms, tools, and computing resources. Although CIShell and the NWB Tool are developed in Java, algorithms developed in other programming languages such as Fortran, C, and C++ can be easily integrated.
—The Network Workbench Community Wiki is a place for users of the NWB tool, CIShell, and other CIShell-based programs to request, obtain, contribute, and share algorithms and datasets. The developer/user community can work together and create additional tools/services to meet both their own needs and the needs of their scientific communities at large. All algorithms and datasets that are available via the NWB tool have been well documented in the NWB Community Wiki.
The talk will present the overall structure, implementation, as well as a demo for potential developers and users.
We would like to acknowledge the NWB team members that made major contributions to the NWB tool and/or Community Wiki: Santo Fortunato, Katy Börner, Alex Vespignani, Soma Sanyal, Ramya Sabbineni, Vivek S. Thakre, Elisha Hardy, and Shashikant Penumarthy.
Scientometrics in the Open Access Era
Abstract: The "Open Access (OA) Advantage" in citations consists of: Early Advantage (early self-archiving produces both earlier and more citations), Usage Advantage (more downloads for OA articles, correlated with later citations), Competitive Advantage (the relative citation advantage of OA over non-OA articles: disappears at 100% OA), Quality Advantage (the OA advantage is higher, the higher the quality of the article), and Quality Bias (authors selectively self-archiving their higher quality articles - a non-causal component: disappears at 100% OA). We are currently comparing the OA advantage for mandated and spontaneous (self-selected) self-archiving. The growing webwide database of OA articles, the proposed US Federal Research Public Access Act (FRPAA), and the UK Research Assessment Exercise's recent transition to metrics will make it possible to: (1) motivate more researchers to provide OA by self-archiving; (2) map the growth of OA across disciplines, countries, and languages; (3) navigate the OA literature using citation-linking and impact ranking; (4) measure, extrapolate, and predict the research impact of individuals, groups, institutions, disciplines, languages, and countries; (5) measure research performance and productivity; (6) assess candidates for research funding; (7) assess the outcome of research funding; (8) map the course of prior research lines, in terms of individuals, institutions, journals, fields, and nations; (9) analyze and predict the direction of current and future research trajectories; and (10) provide teaching and learning resources that guide students (via impact navigation) through the large and growing OA research literature in a way that navigating the web via Google alone cannot come close to doing.
Shadbolt, N., Brody, T., Carr, L. and Harnad, S. (2006) The Open Research Web: A Preview of the Optimal and the Inevitable. In: Jacobs, N. (Ed.), Open Access: Key Strategic, Technical and Economic Aspects. Chandos. http://eprints.ecs.soton.ac.uk/12453/
Berners-Lee, T., De Roure, D., Harnad, S. and Shadbolt, N. (2005) Journal publishing and author self-archiving: Peaceful Co-Existence and Fruitful Collaboration. http://eprints.ecs.soton.ac.uk/11160/
The Particle Swarm: Theme and Variations on Computational Social Learning
Abstract: The particle swarm algorithm is implemented in computer programs that solve hard problems by simulating social processes. Like human beings, "particles" in a population interact, sharing their successes, and over time the entire population settles on optimal patterns of parameters. The performance of the algorithm depends on a number of things, including population size and communication structure, the nature of the rules for interactions among particles, the method by which they are propelled, and the values of coefficients that are used to control convergence and explosion. As the paradigm has evolved since the first papers were presented in 1995, the basic particle swarm has become both more effective and more concise. In this talk I will discuss the philosophy and history of the method, compare some basic versions, discuss issues in implementation, and present some important topics for future research.
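The velocity and position update at the heart of the method can be sketched in a few lines. Below is a generic inertia-weight particle swarm minimizing a simple sphere function; the coefficient values are commonly cited defaults and the function names are illustrative, not necessarily the variants discussed in the talk.

```python
import random

random.seed(1)

def pso(f, dim=2, n_particles=20, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, bounds=(-5.0, 5.0)):
    """Minimize f with a basic inertia-weight particle swarm.

    Each particle's velocity blends its momentum (w), a pull toward its
    own best position (c1, cognitive term), and a pull toward the
    swarm's best position (c2, social term)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the 2-D sphere function f(x) = x1^2 + x2^2.
best, best_val = pso(lambda x: sum(v * v for v in x))
```

Changing the population size, the coefficients, or the communication structure (here a fully connected swarm sharing one global best) changes convergence behavior, which is exactly the design space the talk explores.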
Frictionless Brains: Evolution and Analysis of Brain-Body-Environment Systems
Abstract: Unraveling the neural basis of behavior is a daunting task. Beyond the obvious experimental difficulties, there are significant theoretical challenges that are typical of all biological systems. These challenges include (1) the dynamical complexity and dense interconnectivity of the underlying elements, (2) the often counterintuitive designs produced by evolution, and (3) the fact that nervous systems co-evolved with the bodies and environments in which they are embedded, and can only really be understood within this larger context. One approach to these difficulties is the careful study of idealized models of complete brain-body-environment systems. Like Galileo's frictionless planes, such frictionless brains (and bodies, and environments) can help us to build intuition and, ultimately, the conceptual framework and mathematical and computational tools necessary for understanding the mechanisms of behavior.
In this talk, I will provide a broad overview of a systematic attempt to engage these issues through the evolution and analysis of dynamical "nervous systems" for model agents. Along the way, I will briefly survey a variety of projects, including the general dynamical behavior of recurrent neural circuits, the impact of circuit architecture on dynamics, the structure of fitness space and its influence on evolutionary processes, the interaction between neural and peripheral dynamics in evolved model pattern generators, the interplay of developmental bias and selection during evolution, and the evolution and analysis of learning and such minimally cognitive behaviors as categorical perception, short-term memory and selective attention.
Social Web Search
Abstract: This talk will present two research projects under way in the Network and agents Network (NaN), which study ways of leveraging online social behavior for better Web search. GiveALink.org is a social bookmarking site where users donate their personal bookmarks. A search and recommendation engine is built from a similarity network derived from the hierarchical structure of bookmarks, aggregated across users. 6S is a distributed Web search engine based on an adaptive peer network. By learning about each other, peers can route queries through the network to efficiently reach knowledgeable nodes. The resulting peer network structures itself as a small world that uncovers semantic communities and outperforms centralized search engines.
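As a rough illustration of how a similarity network can be aggregated across users' bookmark collections, the toy sketch below scores two URLs by the overlap of the users who bookmarked both. This is a deliberate simplification with invented data: the actual GiveALink measure also exploits the hierarchical folder structure of each bookmark file.

```python
from math import sqrt

# Each user's bookmark file, flattened to a set of URLs (the real
# GiveALink measure also uses the folder hierarchy within each file).
bookmarks = {
    "alice": {"a.com", "b.com", "c.com"},
    "bob":   {"a.com", "b.com"},
    "carol": {"b.com", "d.com"},
}

def similarity(u, v):
    """Cosine similarity of two URLs over the sets of users who bookmarked them."""
    users_u = {user for user, urls in bookmarks.items() if u in urls}
    users_v = {user for user, urls in bookmarks.items() if v in urls}
    if not users_u or not users_v:
        return 0.0
    return len(users_u & users_v) / sqrt(len(users_u) * len(users_v))
```

Pairwise scores like these, aggregated over many donated bookmark files, form the similarity network that the search and recommendation engine traverses.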
The Fun Revolution: How the New Science of Videogames Will Transform the Real World
Abstract: What if we could all live in a fantasy game instead of the real world? That's not just a philosophical question any more: millions of people already spend large parts of their lives in online game worlds. Though living in a fantasy, these gamers seem happy enough. And if they're happy, maybe others would be happier there as well. Maybe millions and millions of others. Indeed, given the choice between a fantasy world designed to be completely fun all the time, and the real world with its myriad problems, how many would choose reality? Very few, and in all likelihood, not enough to allow daily life in the real world to continue unchanged.
The Fun Revolution uses hard-headed economic and social analysis to reveal how video games, toys no longer, will force reality to become more fun.
Faculty and Students
Abstract: Open your laptops and demo your software. Bring posters to introduce your research questions and results. So far, the following posters and demos are planned:
—Heather Roinestad, Ben Markines, Mira Stoilova, Todd Holloway, Filippo Menczer, and Mike Conover present "Building an Internet Search Engine from your Bookmark Files" as poster and demo.
—Ruj Akavipat, Le-Shin Wu, Ana G. Maguitman, and Filippo Menczer present "6S: Distributing Crawling and Searching Across Web Peers" as poster and software demo.
—Peter Hook presents "Ideological Alliances on the United States Supreme Court: Visualizing Co-Voting Data" as poster.
—Soma Sanyal presents "Effect of citation patterns on network structure" as poster.
—Katy Börner and Julie Smith present "Places & Spaces: Mapping Science Exhibit" as poster.
—Soma Sanyal, Santo Fortunato, Bruce Herr, Elisha Hardy, Weixia (Bonnie) Huang, and Katy Börner present "NWB Community Portal" as demo.
—Weixia (Bonnie) Huang, Santo Fortunato, Ben Markines, Bruce Herr, Soma Sanyal, Ramya Sabbineni, Vivek S. Thakre, Elisha Hardy, Shashikant Penumarthy, and Katy Börner present "NWB Tool and Java-based Dataset, Algorithm, and Executable Integration Using Templates" as demo.
—Bruce Herr, Weixia (Bonnie) Huang, and Ben Markines present "Cyberinfrastructure Shell (CIShell)" as demo.
—Gavin LaRowe and Sumeet Ambre present "The Scholarly Database" as poster and demo.
—Bruce Herr, Weimao Ke, Elisha Hardy, and Katy Börner present "Movies and Actors" as poster.
—Justin Donaldson presents "Music Recommendation Mapping" as demo.
—Stacy Kowalczyk presents "Digital Preservation by Design" as poster.
—Shashikant Penumarthy presents "Virtual World Toolkit (VWTk) 0.8b" as screencast.
—Jon Hobbs, Jodi Smith, Hema Patel, AoNan Tang, Wei Chen, and John Beggs present "Electrophysiological analysis of human epileptogenic tissue" as poster.
—Vittoria Colizza and Alessandro Vespignani present "INFO I590 Pandemics - Introduction to Computational Epidemiology" as poster.
—Sumeet Ambre and John Burgoon present "Mapping Indiana's Intellectual Space" as demo.
Agent-Based Model of Genotype Editing
Abstract: Evolutionary algorithms rarely deal with ontogenetic, non-inherited alteration of genetic information because they are based on a direct genotype-phenotype mapping. In contrast, in Nature several processes have been discovered which alter genetic information encoded in DNA before it is translated into amino-acid chains. Ontogenetically altered genetic information is not inherited but extensively used in regulation and development of phenotypes, giving organisms the ability to, in a sense, re-program their genotypes according to environmental clues. An example of post-transcriptional alteration of gene-encoding sequences is the process of RNA Editing. Here we introduce a novel Agent-based model of genotype editing and a computational study of its evolutionary performance in static and dynamic environments. This model builds on our previous Genetic Algorithm with Edition, but presents a fundamentally novel architecture in which coding and non-coding genetic components are allowed to coevolve. Our goal is twofold: (1) to study the role of RNA Editing regulation in the evolutionary process, and (2) to investigate the conditions under which genotype edition improves the optimization performance of evolutionary algorithms. We show that genotype edition allows evolving agents to perform better in several classes of fitness functions, both in static and dynamic environments. We also present evidence that the indirect genotype/phenotype mapping resulting from genotype editing leads to a better exploration/exploitation compromise of the search process. Therefore, we show that our biologically-inspired model of genotype edition can be used to both facilitate understanding of the evolutionary role of RNA regulation based on genotype editing in biology, and advance the current state of research in Evolutionary Computation.
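To make the idea of coevolving coding and non-coding components concrete, here is a toy sketch, not the authors' model: each agent carries a bitstring genotype plus a short non-inherited-in-effect "editor" string, and before fitness evaluation the editor rewrites the genotype, loosely mimicking the post-transcriptional editing described above. The one-max landscape, mutation rates, and all names are illustrative assumptions.

```python
import random

random.seed(2)
LENGTH, POP, GENS = 20, 30, 60

def edit(genotype, editor):
    """Ontogenetic edit: wherever the editor pattern occurs in the
    genotype, flip the bit that follows it (a toy stand-in for RNA
    editing; the edited result is evaluated but never inherited)."""
    g = list(genotype)
    k = len(editor)
    for i in range(len(g) - k):
        if g[i:i + k] == editor:
            g[i + k] ^= 1
    return g

def fitness(phenotype):
    return sum(phenotype)  # one-max: count the 1 bits

def mutate(bits, rate=0.02):
    return [b ^ (random.random() < rate) for b in bits]

# Each agent = (coding genotype, non-coding editor); both coevolve.
pop = [([random.randint(0, 1) for _ in range(LENGTH)],
        [random.randint(0, 1) for _ in range(3)])
       for _ in range(POP)]

for _ in range(GENS):
    # Selection acts on the *edited* phenotype, not the raw genotype.
    pop.sort(key=lambda agent: fitness(edit(*agent)), reverse=True)
    parents = pop[:POP // 2]
    pop = parents + [(mutate(g), mutate(e)) for g, e in parents]

best = max(fitness(edit(*agent)) for agent in pop)
```

The indirect genotype-phenotype mapping means selection can favor editors that compensate for deleterious genotype bits, which is one intuition for the exploration/exploitation effect the abstract reports.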
The Other Ride of Paul Revere: Brokerage Role in the Making of the American Revolution
Abstract: The celebrated tale of his "Midnight Ride" notwithstanding, Paul Revere's role in the events leading up to the American Revolution remains rather obscure. Joseph Warren, known as the man who sent Revere on that ride, presents a similar quandary. What was the nature of the roles they played in the mobilization process? I address the question from a social structural perspective, reassessing the evidence and reconsidering the key concept of brokerage. The analysis shows that they were bridges par excellence, spanning the various social chasms and connecting disparate organizational elements of the movement, thus bringing together "men of all orders" to forge an emerging movement.
Shin-Kap Han (Ph.D., Columbia University) is Associate Professor of Sociology at the University of Illinois at Urbana-Champaign. His areas of interest are Social Networks, Economic Sociology, Organizations and Institutions, Korean Society (Historical/Contemporary), Careers, and Quantitative Methods. He is currently working on, among others, Korean Chaebol ("Family Business: The Marriage Network of Chaebol Families in Korea") and large scale social movement and networks ("To Harness an Outbreak: A Microstructural Account of Mobilization for the March First Movement").
Designing Multi-sensory Displays of Abstract Data - with Stock Market Trading Examples
Keith V. Nesbitt
Abstract: This talk will describe a general approach for designing multi-sensory (visual, auditory and touch) displays of abstract data. One aim of designing such displays is to create tools that help people understand large amounts of data and find useful patterns in the data. This activity can be described as "Perceptual Data Mining".
While the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is complex and must consider a broad range of human perceptual capabilities and also account for sensory interactions.
This talk will discuss a number of relevant design issues, including: the multi-sensory design space, the design process, the use of design guidelines, and how to evaluate designs. The concepts will be described in the context of a real-world case study that aims to find useful trading patterns in stock market data.
Record Linkage: Concepts and Techniques
Abstract: Poor quality data is prevalent in databases due to a variety of reasons, including transcription errors, lack of standards for recording database fields, etc. To be able to effectively query and integrate such data, a key problem is to efficiently identify pairs of entities (represented as individual records, e.g., persons, or groups of records, e.g., households) in two sets of entities that are approximately the same. This operation has been studied through the years and it is known under various names, including record linkage, entity identification, and approximate join, to name a few. The objective of this talk is to provide an overview of key research results and algorithmic techniques used in this area.
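A minimal sketch of the core operation: block the records so that only plausible candidates are compared, then accept pairs whose approximate string similarity clears a threshold. Production systems use better blocking keys, per-field comparisons, and trained match weights; the records, blocking key, and threshold here are illustrative.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Approximate string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link(records_a, records_b, threshold=0.8):
    """Toy record linkage: block on the first letter of the surname,
    then compare candidate pairs against a similarity threshold."""
    blocks = {}
    for r in records_b:
        blocks.setdefault(r[0].lower(), []).append(r)
    matches = []
    for a in records_a:
        for b in blocks.get(a[0].lower(), []):
            if similarity(a, b) >= threshold:
                matches.append((a, b))
    return matches

pairs = link(["Smith, John", "Jones, Mary"],
             ["Smyth, Jon", "Jones, Marie", "Brown, Ann"])
```

Blocking is what makes the operation efficient: without it, every record in one set would be compared against every record in the other.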
Uncovering functional networks in Internet Traffic
Abstract: The Internet is a complex system in which hundreds of millions of users form transient social networks as they communicate using thousands of applications. In some cases these applications are well-known--email and the Web, for example--and identifiable through their use of publicly advertised ports. In other cases, users conceal their interactions by using nonstandard ports, covert channels, and encryption. Law enforcement officials and network administrators have little power to detect these hidden networks as they attempt to curb illegal file sharing and other criminal activities online. We present a simple technique to detect functional subnetworks based purely on their topological features. User privacy is safeguarded as there is no need to inspect packet contents or track individual Internet addresses. A test involving traffic data collected in a typical day on the Internet2 backbone, involving 15 million distinct hosts, shows that our technique can accurately cluster applications into functional categories. A collection of unknown applications is correctly identified with this method, as confirmed by further analysis.
Advances in Relationship Marketing Thought and Practice: The Influence of Social Network Theory
Abstract: Social network theory was developed to help conceptualize the complexities of social relations, and modern marketing strategies focus on the complexities of managing relationships with customers. During this talk, I review three dominant perspectives of social network theory that marketing scholars have applied to advance relationship marketing thought and practice. As part of this review, I summarize key findings from the past 25 years of marketing literature that incorporates social network theory and/or analysis. I conclude by presenting recent trends that suggest that social network theory will become increasingly relevant and important to marketing researchers and practitioners who operate in an interactive marketing environment.
Revisiting the Role of Trust and Communication in Globally-Distributed Teams: A Social Network Analysis Perspective
Manju K. Ahuja
Abstract: Few would disagree that trust is one of the key themes in organizational/behavioral research today. McEvily, Perrone, and Zaheer (2003, p. 1), for example, contend that while "trust has long figured prominently in scholarly and lay discourse alike," it is only recently that organizational researchers have started devoting substantial attention to understanding the significance of trust. This is due to two simultaneous developments: a growing emphasis on collaboration, and changes in technology "that have reconfigured exchange and the coordination of work across distance and time." In this study, we tested three proposed models (additive, moderation, and mediation) to determine the role of trust in its relationship with communication and performance. Using a social network analysis (SNA) perspective, we conceptualize trustworthiness and communication in terms of centrality with respect to these factors. Our results indicate that the mediating model best explains the role of trust centrality, but considering all three models presents a more complex picture. The strong support for the mediation model indicates that trust centrality generally acts as a mediator between communication centrality and performance. That is, communication leads to performance through trust. The moderation model adds some nuances to this general finding. It suggests that for trustworthy individuals, communication can enhance their performance. But for those who are perceived as less trustworthy, high levels of communication can backfire: their communication can be perceived as a source of annoyance and an unproductive use of the recipient's time.
Abstract: Measuring distance or some other form of proximity between objects is a standard data mining tool. Connection subgraphs were recently proposed as a way to demonstrate proximity between nodes in networks. We propose a new way of measuring and extracting proximity in networks called "cycle-free effective conductance" (CFEC). Our proximity measure can handle more than two endpoints and directed edges, is statistically well-behaved, and produces an effectiveness score for the computed subgraphs. We provide an efficient algorithm. We also report experimental results and show examples from several collaboration and communication networks. The proposed method usually produces results that are readily visualized.
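Effective conductance itself has a convenient random-walk interpretation that can be sketched directly: the conductance between s and t equals deg(s) times the probability that a walk started at s reaches t before returning to s. The cycle-free variant proposed in the talk additionally restricts attention to simple paths; the sketch below estimates only the plain quantity, on an invented 4-cycle graph.

```python
import random

random.seed(0)

# Undirected toy graph: the 4-cycle s - a - t - b - s.
graph = {"s": ["a", "b"], "a": ["s", "t"],
         "t": ["a", "b"], "b": ["s", "t"]}

def effective_conductance(graph, s, t, walks=20000):
    """Monte Carlo estimate of the effective conductance between s and t,
    via C(s, t) = deg(s) * P(a walk from s hits t before returning to s)."""
    hits = 0
    for _ in range(walks):
        node = random.choice(graph[s])   # first step away from s
        while node not in (s, t):
            node = random.choice(graph[node])
        hits += node == t
    return len(graph[s]) * hits / walks

c = effective_conductance(graph, "s", "t")
```

On this 4-cycle the two parallel two-edge paths give an exact effective conductance of 1.0, which the Monte Carlo estimate approaches as the number of walks grows.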
A Simple Approach to Species' Lifetime Distribution in Ecology
Abstract: Since the seminal work of Lotka and Volterra, Ecology has inspired several unsophisticated yet insightful approaches that thereafter found ready application in the more general field of Complex Systems. In this tradition, we address the issue of taxa's lifetime distribution with a zeroth-order evolutionary model. Although the model is simple enough to be exactly solvable and makes no specific assumptions about the pattern of interaction between species, it offers a natural explanation for several apparently conflicting empirical data collections.
Local cortical networks: Functional topology and dynamics
Aonan Tang and John Beggs
Abstract: The average cortical neuron makes and receives about 1,000 synaptic contacts. This anatomical information suggests that local cortical networks are connected in a fairly democratic manner, with all nodes having about the same degree. But the physical connections found in the brain do not necessarily reveal how information flows through the network. Here we will describe our ongoing work to uncover functional connectivity from living networks of cortical neurons in vitro. We use both acute cortical slices and cortical slice cultures which can be kept alive for periods of about 10 hrs. Our first experiments with 60-channel microelectrode arrays did not allow us to get a clear picture of functional network topology. Our more recent work with a 512 electrode array system (in collaboration with Alan Litke of UC Santa Cruz) has allowed us to overcome many of these initial difficulties. We have also made improvements in the way we measure information transfer between recording sites. We will present these new results and discuss the implications they have for cortical information processing.
Egalitarian Search Engines
Abstract: Search engines have become key media for our scientific, economic, and social activities by enabling people to access information on the Web in spite of its size and complexity. On the down side, search engines bias the traffic of users according to their page-ranking strategies, and some have argued that they create a vicious cycle that amplifies the dominance of established and already popular sites. We show that, contrary to these prior claims and our own intuition, the use of search engines actually has an egalitarian effect. We show that the search behavior by users mitigates the attraction of popular pages, directing more traffic toward less popular sites.
Delineating Social Institutions From Semantic Networks of Role-Identities
Abstract: Dictionary definitions provide an accessible and commonsense body of data describing the cultural understandings that individuals have about role-identities. This research analyzes cross-references between definitions of several hundred identities to see whether social institutions can be viewed as confluences of identity meanings. I created a zero-one adjacency matrix by linking identities to the concepts given in their definitions. I then computed boolean powers of the adjacency matrix to simulate the process of looking up words that definitions contain. Principal components analysis of the result organized identities into clusters corresponding to standard social institutions, like family, education, medicine, work, law, religion. The analysis sub-divided some standard institutions in interesting ways, and additionally it identified sexuality as an incipient social institution.
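The boolean-powers step can be sketched directly: multiplying a 0-1 adjacency matrix by itself under AND/OR gives, in entry (i, j), whether identity i reaches concept j in exactly two lookup steps. The tiny identity/concept matrix below is an invented example, not data from the study.

```python
def bool_mult(A, B):
    """Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

# Hypothetical 0-1 matrix linking each identity (row) to the concepts
# mentioned in its dictionary definition (columns).
#   rows/columns: doctor, nurse, hospital, patient
A = [[0, 1, 1, 1],   # "doctor" is defined via nurse, hospital, patient
     [0, 0, 1, 1],   # "nurse" via hospital, patient
     [0, 0, 0, 1],   # "hospital" via patient
     [0, 0, 0, 0]]   # "patient" via none of these

# Squaring simulates looking up the words that the definitions contain:
# A2[i][j] == 1 iff concept j is reachable from identity i in two lookups.
A2 = bool_mult(A, A)
```

Higher boolean powers extend the simulated lookup chain further, and it is the clustering of these reachability patterns that principal components analysis organizes into institutions.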
Statistical Analysis for Network Science
Abstract: This talk highlights the wide range of statistical analyses that are part of network science. Of particular importance are the exponential family of random graph distributions, known as p*, and recent work on robustness and resistance of network data when actors and/or relational ties are missing or removed.
Evolution of Neural Complexity
Larry Yaeger and Olaf Sporns
Abstract: We analyze evolutionary trends in artificial neural dynamics and network architectures specified by haploid genomes in the Polyworld computational ecology. We discover consistent trends in neural connection densities, synaptic weights and learning rates, entropy, mutual information, and an information-theoretic measure of complexity. In particular, we observe a consistent trend towards greater structural elaboration and adaptability, with a concomitant and statistically significant growth in neural complexity.
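For readers unfamiliar with the information-theoretic measures involved, here is how empirical mutual information between two discrete time series (e.g., binarized activations of two model neurons) is computed; the example series are invented.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete series."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two perfectly coupled binary units share one full bit of information...
mi_coupled = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])
# ...while two statistically independent units share none.
mi_indep = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
```

Complexity measures of the kind referenced above combine such pairwise (and higher-order) information terms across the whole network, balancing segregation against integration.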
The Propagation of Innovations in a Social Network
Abstract: We have developed an internet-based experimental platform (for examples, see http://groups.psych.indiana.edu) that allows groups of 2-200 people to interact with each other in real time on networked computers. I will describe experiments using this platform that explore how people attempt to solve simple problems while taking advantage of the developing solutions of other people in their social network. Over 15 rounds of problem solving, participants received feedback not only on the success of their own solutions to a simple search problem, but also on their neighbors' solutions and outcomes. Neighbors were determined by one of four network topologies: locally connected lattice, random, fully connected, and small-world (e.g., a lattice plus a few long-range connections). The results suggest that complete information is not always beneficial for a group, and that problem spaces requiring substantial exploration may benefit from networks with mostly locally connected individuals. We model the dissemination of innovations in these experiments using agents that probabilistically select choices guided by their own and their neighbors' explorations.
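A small-world topology of the kind described above can be generated by the standard recipe of a ring lattice plus a few random long-range shortcuts. The parameters below are illustrative, not those of the actual experiments.

```python
import random

random.seed(3)

def small_world(n, k=2, shortcuts=3):
    """Ring lattice (each node tied to its k nearest neighbors on either
    side) plus a few random long-range shortcut edges."""
    edges = {tuple(sorted((i, (i + d) % n)))
             for i in range(n) for d in range(1, k + 1)}
    while shortcuts:
        i, j = random.sample(range(n), 2)
        e = tuple(sorted((i, j)))
        if e not in edges:
            edges.add(e)
            shortcuts -= 1
    return edges

g = small_world(20)
```

Dropping the shortcut step yields the locally connected lattice condition, while replacing the lattice with uniformly random pairs, or with all pairs, yields the random and fully connected conditions.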
Scalable Visual Comparison of Biological Trees and Sequences
Abstract: We present the TreeJuxtaposer and SequenceJuxtaposer visualization applications for comparing and browsing evolutionary trees and genomic sequences, respectively. These systems use the Focus+Context navigational metaphor of allowing users to fluidly stretch and shrink parts of the view, as if manipulating a rubber sheet with the borders tacked down. We introduce cognitive scalability to this approach by guaranteeing the visibility of landmarks at all times, so that users can stay oriented as they explore complex datasets. In our systems, landmarks can be regions of difference between datasets, or the results of a search, or user-chosen regions. This technique, which we call "accordion drawing", supports smooth realtime transitions between a big-picture overview and drilled-down views that show details in context. Our new PRISAD infrastructure is highly scalable, allowing fluid realtime interaction with trees of several million nodes and multiple aligned sequences of up to 40 million total nucleotides.
Large-Scale Network Analysis with the Boost Graph Libraries
Abstract: In recent years, our ability to collect network data has increased far beyond our capabilities to analyze this data. With this deluge of data, the simple, direct implementations of network analyses and data structures no longer suffice, and we must turn to more advanced techniques such as graph compression and parallel computing. This talk will introduce the Boost Graph Libraries, a set of libraries for graph and network analysis developed by the Open Systems Lab at Indiana University. The Boost Graph Libraries provide a consistent set of interfaces across the entirety of the productivity--performance spectrum, from the rapid-prototyping and visualization capabilities of BGL-Python, to the high-performance sequential BGL and cluster-capable Parallel BGL. This talk will explore the relative merits of each library, to determine which BGL may be right for your network analysis task, regardless of whether your network is measured in tens, thousands, millions, or billions.
Mapping Artistic, Cultural, and Network Assets in the Chicago Metropolitan Area: Context, Project Design, Implementation, and Initial Findings
Harold D. Green, Jr.
Abstract: The Chicago metropolitan area has, for the past few years, become a key destination for Mexican transnationals, both temporary and permanent. Post-NAFTA Mexican immigrants have combined their cultural, artistic and network resources to create hybrid behavioral and cultural forms unlike those commonly used in America or Mexico. The use of these hybrid forms allows migrants to leverage their social and cultural resources to gain access to basic assistance, jobs, social support services, and other types of group-based or group-facilitated resources. This study was conducted in conjunction with the Field Museum in Chicago. It combined innovations in ethnographic research—such as the use of ATLAS.ti and other ethnographic support tools—with new techniques for egocentric social network data collection that incorporate electronic data collection and one-touch network discovery capabilities, to delve more deeply into the realities of life for the Mexican immigrant community. In the process, aspects of the widely popular ‘network theory of migration’ are investigated in more detail than has been previously possible. In this talk, I present the motivations for the project, identify the factors that led to the synthesis of ethnography and social network analysis, explain the new approaches that the research team developed and, finally, present some initial findings from the project, calling attention to how those findings correspond to current thinking vis-à-vis ‘network theory of migration’ and to the current immigration policy environment.
Supporting Visual Analysis: Perceptual, Cognitive, and Semantic Techniques
Abstract: W. Bradford Paley has deployed work in seemingly diverse settings: the Museum of Modern Art, the New York Stock Exchange, NYU Bioinformatics, the Whitney; he has won equally diverse recognition: an ID Design Distinction award, Grand Prize in Tokyo's international arts festival, engineering tool awards for input devices, fellowship in the New York Foundation for the Arts. The same principles drive all of this work: If you engage the eye, you can engage the mind--as long as you "know the protocol," and keep the message consistent.
This talk has two parts. The first part will describe a knowledge acquisition pipeline: a designer/engineer's abstraction of visual, cognitive, and semantic "protocols" that engage seven distinct layers of the visual thinking process. The second part introduces numerous examples of Mr. Paley's work informed by these protocols. Among them are the artwork TraceEncounters, which is becoming a social network analysis tool for use by real researchers; the Whitney-commissioned CodeProfiles, which has been mistaken for a debugging tool; and the Structuralist text analysis tool TextArc, which was "mistaken for art" and won the Tokyo New Media Festival's grand prize.