Simple Project List Software Map

Distributed Computing
171 projects in result set
LastUpdate: 2019-12-14 16:04

Diskless Remote Boot in Linux (DRBL)

DRBL provides a diskless or systemless environment for client machines. It uses distributed hardware resources and makes it possible for clients to fully access local hardware. It also includes Clonezilla, a partition and disk cloning utility similar to Ghost.

LastUpdate: 2016-08-09 20:26

Xming X Server for Windows

Xming is an outstanding X Window server for Microsoft Windows XP/Vista/7/8 (+ Server 2003/2008/2012). It is fully featured, small and fast, simple to install, and runs standalone on Microsoft Windows, so it can be used anywhere without a per-machine installation.

LastUpdate: 2015-06-14 02:35

hadoop for windows

Unofficial prebuilt binary packages of Apache Hadoop for Windows, Apache Hive for Windows, Apache Spark for Windows, Apache Drill for Windows, and Azkaban for Windows.

Development Status: 2 - Pre-Alpha
Target Users: Science/Research
Operating System: MinGW/MSYS (MS Windows), Windows 7
Programming Language: Java
Register Date: 2015-02-22 06:32
LastUpdate: 2014-06-03 08:35


JPPF

JPPF makes it easy to parallelize computationally intensive tasks and execute them on a grid.

LastUpdate: 2016-06-08 22:14


Ganglia

Ganglia is a scalable, distributed monitoring system for high-performance computing systems such as clusters and grids. It is based on a hierarchical design targeted at federations of clusters, and supports clusters of up to 2,000 nodes.

LastUpdate: 2021-01-13 22:56

Talend Open Studio for Data Integration


LastUpdate: 2012-11-06 23:43

Shared Scientific Toolbox in Java

The Shared Scientific Toolbox is a library that facilitates development of efficient, modular, and robust scientific/distributed computing applications in Java. It features multidimensional arrays with extensive linear algebra and FFT support, an asynchronous, scalable networking layer, and advanced class loading, message passing, and statistics packages.

LastUpdate: 2006-05-01 05:51


phpMyLibrary

phpMyLibrary is a PHP/MySQL library automation application. The program consists of cataloging, circulation, and WebPAC modules, and also has import/export functions. It strictly follows the standards for adding authority materials.
LastUpdate: 2013-07-29 22:58


Makeflow

Makeflow is a workflow engine for executing large complex applications on clusters, clouds, and grids. It can be used to drive several different distributed computing systems, including Condor, SGE, and the included Work Queue system. It does not require a distributed filesystem, so you can use it to harness whatever collection of machines you have available. It is typically used for scaling up data-intensive scientific applications to hundreds or thousands of cores.
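
Makeflow workflows are written in a Make-style rule language: each rule lists output files, input files, and the command that produces the outputs. A minimal hypothetical example (the scripts and file names here are made up for illustration):

```make
# Split the input into two parts, then merge the processed parts.
part.a part.b: input.txt split.sh
	./split.sh input.txt part.a part.b

result.txt: part.a part.b merge.sh
	./merge.sh part.a part.b > result.txt
```

Makeflow infers the dependency graph from the rules and dispatches independent commands in parallel; the `-T` option selects the batch system (e.g. `makeflow -T condor example.makeflow`).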

LastUpdate: 2013-10-21 23:04


A suite of software tools, written in Java, for authoring and delivering learning objects that conform to IMS standards. *** This site contains source code only. For binaries, go to ***
LastUpdate: 2010-06-17 07:58


DAC (Dynamic Agent Computations) is a novel software framework designed for implementing multi-agent systems that describe parallel computations. The whole system is easy to configure and extend, but also very efficient and scalable. Moreover, the technology that is used (JMS, Cajo, JMX) ensures high reliability of the framework, which can be used in a production environment.

LastUpdate: 2013-07-29 22:54

Parrot and Chirp

Parrot and Chirp are user-level tools that make it easy to rapidly deploy wide area filesystems. Parrot is the client component: it transparently attaches to unmodified applications, and redirects their system calls to various remote servers. A variety of controls can be applied to modify the namespace and resources available to the application. Chirp is the server component: it allows an ordinary user to easily export and share storage across the wide area with a single command. A rich access control system allows users to mix and match multiple authentication types. Parrot and Chirp are most useful in the context of large scale distributed systems such as clusters, clouds, and grids where one may have limited permissions to install software.
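
A sketch of the typical division of labor, using the `chirp_server` and `parrot_run` commands from the CCTools distribution (hostnames and paths here are hypothetical):

```shell
# On the server machine: export a directory tree with a single command.
chirp_server -r /home/alice/data

# On the client: run an unmodified program under Parrot. Parrot
# intercepts the program's system calls, so remote Chirp servers
# appear as ordinary directories under the /chirp namespace.
parrot_run ls /chirp/server.example.org/
```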

LastUpdate: 2011-03-22 04:39

Dapper Dataflow Engine

Dapper, or "Distributed and Parallel Program Execution Runtime", is a tool for taming the complexities of developing for large-scale cloud and grid computing, enabling the user to create distributed computations from the essentials: the code that will execute, along with a dataflow graph description. It supports rich execution semantics, carefree deployment, a robust control protocol, modification of the dataflow graph at runtime, and an intuitive user interface.

LastUpdate: 2010-12-14 19:35


StarCluster

StarCluster is a utility for creating traditional computing clusters used in research labs or for general distributed computing applications on Amazon's Elastic Compute Cloud (EC2). It uses a simple configuration file provided by the user to request cloud resources from Amazon and to automatically configure them with a queuing system, an NFS-shared /home directory, passwordless SSH, OpenMPI, and ~140GB of scratch disk space. It consists of a Python library and a simple command-line interface to the library. For end users, the command-line interface provides simple, intuitive options for getting started with distributed computing on EC2 (e.g. starting/stopping clusters, managing AMIs, etc.). For developers, the library wraps the EC2 API to provide a simplified interface for launching/terminating nodes, executing commands on the nodes, copying files to/from the nodes, etc.
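
The configuration file mentioned above is a plain INI file. A minimal sketch of `~/.starcluster/config`, with placeholder values (the section names and keys follow StarCluster's documented format, but the specific values are hypothetical):

```ini
[global]
DEFAULT_TEMPLATE = smallcluster

[aws info]
AWS_ACCESS_KEY_ID = <your access key id>
AWS_SECRET_ACCESS_KEY = <your secret key>
AWS_USER_ID = <your user id>

[keypair mykey]
KEY_LOCATION = ~/.ssh/mykey.rsa

; A cluster template: 4 nodes of the same instance type,
; accessed with the keypair defined above.
[cluster smallcluster]
KEYNAME = mykey
CLUSTER_SIZE = 4
NODE_INSTANCE_TYPE = m1.small
```

With a template defined, a cluster is started and torn down from the command line, e.g. `starcluster start mycluster` and `starcluster terminate mycluster`.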

LastUpdate: 2008-09-30 03:12


S4PM

The Simple, Scalable, Script-based Science Processor for Measurements (S4PM) is a system for highly automated processing of science data. It scales up to large processing systems and down to small, special-purpose processing strings.