====== Resources ======

**Lab Computing Servers**:

  * //**data**//: 10-core CPU (i9-7900X) with 128GB memory + 4 GPUs (4 x GTX 1080 Ti, 11GB memory each)
  * //**knowledge**//: 10-core CPU (i9-7900X) with 128GB memory + 4 GPUs (4 x GTX 1080 Ti, 11GB memory each)
  * //**image**//: 6-core CPU (i7-6800K) with 128GB memory + 4 GPUs (4 x GTX 1080, 8GB memory each)
  * //**text**//: 6-core CPU (i7-5820K) with 64GB memory + 4 GPUs (4 x TITAN X, 12GB memory each)
  * //**video**//: 6-core CPU (Xeon E5-1650) with 64GB memory + 3 GPUs (1 x Tesla K40c* + 2 x TITAN X, 12GB memory each)
  * //**language**//: 6-core CPU (Xeon E5-2620) with 64GB memory + 2 GPUs (2 x TITAN X, 12GB memory each)
  * //**music**//: 6-core CPU (Xeon E5-1660) with 64GB memory + 2 GPUs (2 x TITAN X, 12GB memory each)
  * //**voice**//: 6-core CPU (Xeon E5-2620) with 64GB memory + 2 GPUs (1 x GTX 1080 + 1 x GTX 1070, 8GB memory each)
  * //**audio**//: 4-core CPU (i7-950) with 16GB memory + 2 GPUs (2 x GTX 1070, 8GB memory each)
  * //**speech1**//: 4-core CPU (i7-6700) with 16GB memory
  * //**speech2**//: 4-core CPU (i7-6700) with 16GB memory + 1 GPU (GTX 780 Ti, 3GB memory)
  * //**speech3**//: 4-core CPU (i7-6700) with 32GB memory + 2 GPUs (1 x GTX 1070, 8GB memory + 1 x GTX 780 Ti, 3GB memory)
  * //**speech4**//: 4-core CPU (i7-6700) with 32GB memory + 2 GPUs (2 x GTX 1070, 8GB memory each)
  * //**speech5**//: 4-core CPU (i7-6700) with 32GB memory + 2 GPUs (2 x GTX 1070, 8GB memory each)
  * //**speech6**//: 4-core CPU (i7-6700) with 32GB memory + 2 GPUs (2 x GTX 1070, 8GB memory each)
  * //**speech7**//: 4-core CPU (i7-6700) with 32GB memory + 2 GPUs (2 x GTX 1070, 8GB memory each)
  * //**speech8**//: 4-core CPU (Xeon W-2125) with 32GB memory + 256GB SSD + 1 GPU (GTX 1080, 8GB memory)
  * //**speech9**//: 4-core CPU (Xeon W-2125) with 32GB memory + 256GB SSD + 1 GPU (GTX 1080, 8GB memory)
  * //**speech10**//: 4-core CPU (Xeon W-2125) with 32GB memory + 256GB SSD + 1 GPU (GTX 1080, 8GB memory)

**Check [[:gpu_assignment|the GPU assignments]] before submitting any job to the GPUs, and see [[:check_workload_cpu_gpu|this page]] for how to check the current workloads. Never overload any machine.** A minimal example of doing this from a job script is sketched at the bottom of this page.

**Storage Space**:

  * The shared network disk (with backup) is located at ///cs/research/asr//.
  * Each of the above machines also has local disk space (several TB each) at ///local//, without any backup. Additional local disk space is available under ///tmp//.
    - **text**: ///local/scratch// (6 TB)
    - **video**: ///local/scratch// (6 TB); ///local/scratch1// (6 TB)
    - **music**: ///local/scratch// (2 TB); ///local/scratch1// (6 TB)
    - **language**: ///local/scratch// (1 TB)
    - **voice**: ///local/scratch// (1 TB); ///local/scratch1// (2 TB)
    - **audio**: ///local/scratch// (1 TB)
    - **speech1** to **speech7**: ///local/scratch// (500 GB each)

**Software**:

  * **TensorFlow**: see [[:tensorflow|here]] (provided by Mingbin) for how to configure TensorFlow. Alternatively, you may configure TensorFlow as described [[:tensorflow2|here]] (provided by Chao).
  * TensorFlow Tutorials:
    - [[https://m.youtube.com/watch?feature=youtu.be&v=vq2nnJ4g6N0|TensorFlow and deep learning - without a PhD by Martin Görner]]
    - [[https://m.youtube.com/watch?v=Ejec3ID_h0w&feature=youtu.be|TensorFlow Tutorial (Sherry Moore, Google Brain)]]
    - [[http://www.wildml.com/2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/|RNNs in TensorFlow, a Practical Guide and Undocumented Features – WildML]]
    - [[http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/21/tfrecords-guide/|TFRecords Guide]]
  * **Acknowledgement**: the Tesla K40c graphics card (marked with * in the list above) was donated by NVIDIA.
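
To avoid overloading the shared servers, the sketch below shows one way a job can check the current GPU load and restrict itself to its assigned GPU before starting TensorFlow. This is only a minimal sketch, not an official lab script: it assumes Python 3, TensorFlow 2.x, and ''nvidia-smi'' are available on the machine, and the GPU index used is a placeholder, not a real assignment.

<code python>
# Minimal sketch, not an official lab script. Assumes Python 3, TensorFlow 2.x
# and nvidia-smi on the target machine; the GPU index "2" below is a placeholder.
import os
import subprocess


def show_gpu_load():
    """Print per-GPU memory use and utilisation reported by nvidia-smi."""
    query = "index,name,memory.used,memory.total,utilization.gpu"
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={query}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)


# 1. Inspect the machine before launching anything heavy.
show_gpu_load()

# 2. Restrict this process to the assigned GPU(s) *before* importing TensorFlow,
#    so the job cannot grab every card on the shared server.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # placeholder: use your assigned index

import tensorflow as tf

# 3. Allocate GPU memory on demand instead of reserving the whole card up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
</code>

Setting ''CUDA_VISIBLE_DEVICES'' before importing TensorFlow keeps the process from claiming every card on a shared server; always take the actual index from [[:gpu_assignment|the GPU assignments]] page.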