Search Results

Item hits:
  • Thesis


  • Authors: Sai Ranga, Prashanth C. (2006)

  • Current heterogeneous meta-computing systems, such as computational clusters and grids, offer a low-cost alternative to supercomputers. In addition, they are highly scalable and flexible. They consist of a host of diverse computational devices that collaborate via a high-speed network and may execute high-performance applications. Many high-performance applications are an aggregate of modules, and efficient scheduling of such applications on meta-computing systems is critical to meeting deadlines. In this dissertation, we introduce three new algorithms: the Heterogeneous Critical Node First (HCNF) algorithm, the Heterogeneous Largest Task First (HLTF) algorithm, and the Earliest Finish Time with Dispatch Time (EFT-DT) algorithm. HCNF is used to schedule parallel applications of forms r...
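
A minimal sketch of the earliest-finish-time idea that underlies schedulers of this family, assuming a toy task graph and two heterogeneous processors. The task names, costs, and greedy rule are illustrative assumptions, not the HCNF, HLTF, or EFT-DT algorithms themselves (EFT-DT additionally models dispatch time, which this sketch omits):

```python
# Greedy earliest-finish-time list scheduling on a heterogeneous system.
# exec_time[task][proc]: cost of each task on each processor (assumed values)
exec_time = {
    "A": [3, 5],   # task A runs faster on processor 0
    "B": [4, 2],
    "C": [6, 3],
}
deps = {"A": [], "B": ["A"], "C": ["A"]}  # B and C depend on A

proc_free = [0, 0]   # time at which each processor becomes available
finish = {}          # task -> finish time

for task in ["A", "B", "C"]:   # tasks visited in topological order
    ready = max((finish[d] for d in deps[task]), default=0)
    # pick the processor that yields the earliest finish time for this task
    best = min(range(2),
               key=lambda p: max(ready, proc_free[p]) + exec_time[task][p])
    start = max(ready, proc_free[best])
    finish[task] = start + exec_time[task][best]
    proc_free[best] = finish[task]
    print(f"{task} -> proc {best}, finishes at {finish[task]}")
```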

  • Thesis


  • Authors: Dang, Jiangbo (2006)

  • Due to the convergence of industrial demands for business-process and supply-chain management and recent results in multiagent systems, autonomous software services, and the Semantic Web, Web services are becoming the main focus for the next generation of the Internet. They will behave like intelligent agents by composing themselves cooperatively into workflows. A workflow is a set of services that execute by carrying out specified control and data flows. Agents are persistent active entities that can perceive, reason, and act in their environment, and communicate with other agents. Agents can interact autonomously across enterprise boundaries and, when thought of as services, provide a new way to achieve programming-in-the-large. Agents interact with other agents through negotiatio...
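
The abstract defines a workflow as a set of services executing specified control and data flows. A minimal sketch of that notion, assuming three hypothetical services composed into a linear pipeline; real Web-service workflows add negotiation, discovery, and richer control flow:

```python
# A workflow as composed services: control flow is the pipeline order,
# data flow is the record passed from one service to the next.

def extract(order_id: int) -> dict:     # service 1: fetch the order data
    return {"order": order_id, "qty": 3}

def price(order: dict) -> dict:         # service 2: transform the record
    return {**order, "total": order["qty"] * 9.99}

def notify(order: dict) -> str:         # service 3: act on the result
    return f"order {order['order']}: ${order['total']:.2f}"

workflow = [extract, price, notify]     # a simple linear composition
result = 42
for service in workflow:
    result = service(result)
print(result)   # order 42: $29.97
```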

  • Thesis


  • Authors: Jang, Myeong-Wuk (2006)

  • The growth of the computational power of computers and the speed of networks has made large-scale multi-agent systems a promising technology. As the number of agents in a single application approaches thousands or millions, distributed computing has become a general paradigm in large-scale multi-agent systems to take the benefits of parallel computing. However, since these numerous agents are located on distributed computers and interact intensively with each other to achieve common goals, the agent communication cost significantly affects the performance of applications. Therefore, optimizing the agent communication cost on distributed systems could considerably reduce the runtime of multi-agent applications. Furthermore, because static multi-agent frameworks may not be suitable fo...
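
A back-of-envelope sketch of why agent communication cost dominates at scale: remote messages cost far more than local ones, so agent placement matters. The latencies, agent names, and message counts are illustrative assumptions, not figures from the thesis:

```python
# Compare total communication cost under two agent-to-host placements.
LOCAL_US, REMOTE_US = 1, 100   # per-message cost in microseconds (assumed)

def comm_cost(messages, placement):
    """Sum message costs given a map of agent -> host."""
    total = 0
    for sender, receiver, count in messages:
        per_msg = LOCAL_US if placement[sender] == placement[receiver] else REMOTE_US
        total += count * per_msg
    return total

msgs = [("a1", "a2", 10_000), ("a2", "a3", 500)]
naive = {"a1": "hostA", "a2": "hostB", "a3": "hostB"}
tuned = {"a1": "hostA", "a2": "hostA", "a3": "hostB"}  # co-locate the chatty pair
print(comm_cost(msgs, naive), "us vs", comm_cost(msgs, tuned), "us")
# 1000500 us vs 60000 us -- co-location cuts communication cost ~17x
```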

  • Thesis


  • Authors: Ritchey, Ronald W. (2006)

  • Attack graphs are often used by penetration testing teams to represent the individual steps an attacker could use to compromise a network. The graphs built manually by these teams, though, are often incomplete and require substantial effort to create. Because of this, automated network attack graph generation is an area that has enjoyed significant research over the last several years. This dissertation further develops the body of knowledge in automated network attack graphs, specifically focusing on proving their usefulness for protecting real networks. I show that attack graph construction and analysis can be automated while maintaining characteristics of real networks, that attack graphs can be used to assess the resiliency of candidate enterprise networks, and that attack graphs can be...
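
A toy illustration of automated attack graph construction, assuming invented hosts and exploits: states are sets of attacker privileges, and an edge is added whenever an exploit's preconditions are satisfied. Real generators model actual vulnerabilities and network connectivity:

```python
# Worklist expansion of reachable privilege states into an attack graph.
# exploit: (name, preconditions, postcondition) -- all invented for illustration
exploits = [
    ("ssh_weak_pw",   {"net:dmz"},  "user:web"),
    ("local_privesc", {"user:web"}, "root:web"),
    ("db_trust",      {"root:web"}, "root:db"),
]

def build_graph(initial):
    """Expand every state reachable from the initial privilege set."""
    frontier, seen, edges = [frozenset(initial)], set(), []
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        for name, pre, post in exploits:
            if pre <= state and post not in state:
                nxt = state | {post}
                edges.append((state, name, nxt))
                frontier.append(nxt)
    return edges

for src, exploit, dst in build_graph({"net:dmz"}):
    print(sorted(src), f"--{exploit}-->", sorted(dst))
```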

  • Thesis


  • Authors: Agarwal, Sachin Kumar (2006)

  • The unifying theme of this work is to provide new scalable solutions comprising algorithms, protocols, and data structures for solving data synchronization and set difference estimation problems. These problems are repeatedly encountered in distributed systems, and solving them efficiently directly affects the scalability of the distributed system, i.e., how many network hosts can participate in the distributed system. Our new solutions, if deployed, can significantly reduce communication, computational overhead, and meta-data stored on hosts as compared to currently used approaches for data synchronization and set difference estimation. Modern distributed network applications often utilize a wholesale data transfer protocol known as "slow sync" for reconciling data on constit...
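
A sketch contrasting "slow sync" (ship the entire dataset) with a simple hash-exchange round that transfers only differing records. The record sizes, fingerprint length, and protocol shape are illustrative assumptions, not the dissertation's own solutions, which aim to cut communication further still:

```python
# Slow sync vs. a one-round hash exchange for set reconciliation.
import hashlib

def rec(i: int) -> bytes:
    return f"rec{i}".encode().ljust(100, b"x")   # 100-byte records (assumed)

host_a = {rec(i) for i in range(1000)}
host_b = (host_a - {rec(7)}) | {rec(9999)}       # B lacks rec 7, has one extra

def digest(r: bytes) -> bytes:
    return hashlib.sha256(r).digest()[:8]        # 8-byte fingerprint

# slow sync: A ships its entire dataset to B
slow_bytes = sum(len(r) for r in host_a)

# hash round: A sends one fingerprint per record; only records B lacks follow
fp_b = {digest(r) for r in host_b}
missing_on_b = [r for r in host_a if digest(r) not in fp_b]
hash_bytes = 8 * len(host_a) + sum(len(r) for r in missing_on_b)

print(f"slow sync: {slow_bytes} B, hash exchange: {hash_bytes} B")
# slow sync: 100000 B, hash exchange: 8100 B -- yet the hash round still
# costs O(set size), which is what difference-sized protocols try to avoid
```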

  • Thesis


  • Authors: You, Yuqiu (2006)

  • The top problems manufacturing enterprises face in implementing system integration solutions are confusing solutions and terminology, a lack of understanding of cross-domain technologies, and a lack of business justification. In this study, a generalized web-based partial module was established to interface between manufacturing control functions and higher management-level functions in a manufacturing enterprise. It is composed of three parts: a LabVIEW-based data collector, a system server, and a web-based interface. The data collector was constructed as an open-source system module for data collection from LabVIEW-based control applications. It can be integrated into LabVIEW VIs without requiring extra system resources from the control server. The web interface and data struc...
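
A mock of the three-part pipeline the abstract describes (collector, server, web view), with an in-memory queue standing in for the network. All names and the data shape are assumptions; the study's actual collector runs inside LabVIEW VIs:

```python
# collector -> system server -> web interface, simulated in one process.
from collections import deque

queue = deque()   # stands in for the link between collector and server

def collector(machine: str, value: float) -> None:
    """Data collector: capture a control-level reading."""
    queue.append({"machine": machine, "value": value})

def server_store(db: list) -> None:
    """System server: drain queued readings into storage."""
    while queue:
        db.append(queue.popleft())

def web_view(db: list) -> str:
    """Web interface: render the latest reading per machine."""
    latest = {r["machine"]: r["value"] for r in db}
    return ", ".join(f"{m}={v}" for m, v in sorted(latest.items()))

db: list = []
collector("press_01", 42.5)
collector("press_01", 43.1)
server_store(db)
print(web_view(db))   # press_01=43.1
```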

  • Thesis


  • Authors: Marino, Mark Christopher (2006)

  • Amidst the various forms of electronic literature stands a class of interactive programs that simulates human conversation. A chatbot, or chatterbot, is a program with which users can "speak," typically by exchanging text through an instant-messaging-style interface. Chatbots have been therapists, Web site hosts, language instructors, and even performers in interactive narratives. Over the past ten years, they have proliferated across the Internet, despite being based on a technology that predates the Web by thirty years. In my readings, these chatbots are synecdochic of the process by which networked identities form on the Internet within the power dynamics of hegemonic masculinity. Chatbots, in this light, model the collaborative performance humans enact on electronically-mediated ...
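
The decades-old technique the abstract alludes to is ELIZA-style pattern matching: scan the user's utterance for a pattern, then echo captured text back inside a canned template. A minimal sketch with invented rules, not drawn from any chatbot discussed in the thesis:

```python
# ELIZA-style rule matching: (regex pattern, response template) pairs.
import re

rules = [
    (r"\bI am (.*)",   "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r".*",            "Please, tell me more."),   # fallback keeps the turn going
]

def reply(utterance: str) -> str:
    for pattern, template in rules:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return ""

print(reply("I am tired of the Internet"))
# Why do you say you are tired of the Internet?
```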

  • Thesis


  • Authors: Morgan, Grayson B. (2006)

  • As the Department of Defense (DoD) moves to create decision superiority through the use of network centric operations, decision-makers become increasingly exposed to data quality vulnerabilities that mimic integrity failures within the Information Assurance (IA) security program. This outcome is a result of the current IA enforcement mechanism, known as the Defense Information Assurance Certification and Accreditation Process (DIACAP), not being designed to mitigate risks involving data quality (accuracy, relevance, timeliness, usability, completeness, brevity, and security) beyond those associated with security. Misconceptions and the uncertainty created by the inability of decision-makers to distinguish between poor quality data and breaches of IA integrity lead to a reduced trust of cr...

  • Thesis


  • Authors: Lakhina, Anukool (2006)

  • To attack this problem, we adopt the general strategy of seeking low-dimensional approximations that preserve important traffic properties. Our starting point, and the first contribution of this dissertation, is to demonstrate that accurate low-dimensional approximations of network traffic often exist. We show that network-wide traffic measurements that exhibit as many as hundreds of dimensions can be approximated well using a much smaller set of dimensions (for example, fewer than ten). This observation of low effective dimensionality is key, and provides leverage on a number of problems related to network operations. In particular, low effective dimensionality leads us to make use of subspace methods. These methods systematically exploit the low dimensionality of multi-feature traf...
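
A sketch of the low-effective-dimensionality observation using principal components (one kind of subspace method). The synthetic "traffic matrix" is an assumption: rows are time bins, columns are origin-destination flows, and by construction only three hidden patterns drive all 100 flows:

```python
# Measure how few principal components explain a many-flow traffic matrix.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
# 100 flows driven by just 3 hidden temporal patterns plus small noise
patterns = np.stack([np.sin(t / 10), np.cos(t / 25), (t % 50) / 50.0])
traffic = rng.random((100, 3)) @ patterns \
          + 0.01 * rng.standard_normal((100, 200))

# SVD of the mean-centered (time x flow) matrix; s holds singular values
X = traffic.T - traffic.T.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
for k in (1, 3, 10):
    print(f"k={k}: {100 * (s[:k]**2).sum() / (s**2).sum():.1f}% of variance")
# with 3 true patterns, k=3 already captures nearly all the variance
# of the 100-dimensional measurements
```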