How to use TensorFlow in your browser

Summary: Take advantage of TensorFlow.js to develop and train machine learning models in JavaScript and deploy them in a browser or on Node.js

 


▲ Image (Source: KTSimage / Getty Images)

While you can train simple neural networks in TensorFlow with relatively small amounts of training data, deep neural networks with large training datasets really need CUDA-capable Nvidia GPUs, Google TPUs, or FPGAs for acceleration. The alternative has, until recently, been to train on clusters of CPUs for weeks.

One of the innovations introduced with TensorFlow 2.0 is a JavaScript implementation, TensorFlow.js. I wouldn’t have expected that to improve training or inference speed, but it does, given its support for all GPUs (not just CUDA-capable GPUs) via the WebGL API.

What is TensorFlow.js?

TensorFlow.js is a library for developing and training machine learning models in JavaScript, and deploying them in a browser or on Node.js. You can use existing models, convert Python TensorFlow models, use transfer learning to retrain existing models with your own data, and develop models from scratch.
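As a rough illustration of what "developing a model from scratch" looks like, here is a minimal sketch using the TensorFlow.js Layers API; the single-layer model and the toy training data are made up for illustration, not taken from the article.

```typescript
// Minimal TensorFlow.js sketch: a one-layer model learning y ≈ 2x - 1.
// The data and model here are illustrative assumptions, not the article's code.
import * as tf from '@tensorflow/tfjs';

async function run(): Promise<void> {
  // Define a tiny model: one dense unit taking a single input feature.
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

  // Toy training data (hypothetical values).
  const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
  const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

  await model.fit(xs, ys, { epochs: 200 });

  // Run inference on a new input.
  (model.predict(tf.tensor2d([10], [1, 1])) as tf.Tensor).print();
}

run();
```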

TensorFlow.js back ends

TensorFlow.js supports multiple back ends for execution, although only one can be active at a time. The TensorFlow.js Node.js environment supports using an installed build of Python/C TensorFlow as a back end, which may in turn use the machine’s available hardware acceleration, for example CUDA. There is also a JavaScript-based back end for Node.js, but its capabilities are limited.
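A sketch of how you might select the Node.js back end: importing @tensorflow/tfjs-node (or @tensorflow/tfjs-node-gpu for CUDA machines) registers the native TensorFlow binding, while importing plain @tensorflow/tfjs leaves you on the slower pure-JavaScript back end. The specific op used here is just a smoke test.

```typescript
// Importing @tensorflow/tfjs-node registers the native TensorFlow build
// as the back end; swap in @tensorflow/tfjs-node-gpu to use CUDA.
import * as tf from '@tensorflow/tfjs-node';

console.log(tf.getBackend()); // expected to report the native 'tensorflow' back end

// Quick smoke test: a small matrix multiply executed on the native back end.
tf.matMul(tf.randomNormal([2, 2]), tf.randomNormal([2, 2])).print();
```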

In the browser, TensorFlow.js has several back ends with different characteristics. The WebGL back end provides GPU support using WebGL textures for storage and WebGL shaders for execution, and can be up to 100x faster than the plain CPU back end. WebGL does not require CUDA, so it can take advantage of whatever GPU is present.
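In practice TensorFlow.js normally picks the WebGL back end automatically when the browser supports it, so the sketch below mostly serves to request it explicitly and confirm which back end is active.

```typescript
// Request the WebGL back end in the browser and verify it is active.
import * as tf from '@tensorflow/tfjs';

async function useWebGL(): Promise<void> {
  await tf.setBackend('webgl'); // ask for the WebGL back end
  await tf.ready();             // wait for initialization to finish
  console.log(`Active back end: ${tf.getBackend()}`);
}

useWebGL();
```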

The WebAssembly (WASM) TensorFlow.js back end for the browser uses the XNNPACK library for optimized CPU implementation of neural network operators. The WASM back end is generally much faster (10x to 30x) than the JavaScript CPU back end, but is usually slower than the WebGL back end except for very small models. Your mileage may vary, so test both the WASM and WebGL back ends for your own models on your own hardware.
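To run that comparison yourself, a sketch along these lines switches the active back end to WASM and times a representative op; package and version details may differ in your setup, and the matrix size is an arbitrary choice for benchmarking.

```typescript
// Switch to the WASM back end and time an op, to compare against WebGL.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' back end

async function useWasm(): Promise<void> {
  await tf.setBackend('wasm');
  await tf.ready();
  console.log(`Active back end: ${tf.getBackend()}`);

  // Rough timing probe; rerun with the 'webgl' back end to compare.
  const t0 = performance.now();
  tf.matMul(tf.randomNormal([512, 512]), tf.randomNormal([512, 512])).dataSync();
  console.log(`512x512 matMul took ${(performance.now() - t0).toFixed(1)} ms`);
}

useWasm();
```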

See the full text: InfoWorld

If you like this article, please follow our Facebook Page: Big Data In Finance

 

