How deep the machine learning can be.

2020 
Today we live in the age of artificial intelligence and machine learning; from small startups to hardware and software giants, everyone wants to build machine-intelligence chips and applications. The task, however, is hard, and not only because of the size of the problem: the technology one can utilize (and the paradigm it is based upon) strongly limits the chances of succeeding efficiently. Single-processor performance has practically reached the limits that the laws of nature permit, so the only feasible way to achieve the needed high computing performance seems to be parallelizing many sequentially working units. The laws of (massively) parallel computing, however, differ from those experienced when assembling and utilizing systems comprising just a few single processors. As machine learning is mostly based on conventional computing (processors), we scrutinize the known, but somewhat faded, laws of parallel computing as they apply to AI. This paper attempts to review some of the caveats, especially concerning scaling the computing performance of AI solutions.
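The "laws of parallel computing" alluded to above are typically exemplified by Amdahl's law, which bounds the speedup obtainable from adding processing units. The sketch below is an assumed illustration of that bound (the parallel fraction value and the function names are hypothetical, not taken from the paper); it shows how speedup saturates even for an almost perfectly parallelizable workload.

    # Hedged sketch: Amdahl's law speedup for a workload where only a
    # fraction of the work can be parallelized. The value p = 0.99 is an
    # assumed example, not a figure from the paper.
    def amdahl_speedup(num_units: int, parallel_fraction: float) -> float:
        """Speedup on num_units processors when only parallel_fraction
        of the work runs in parallel; the rest stays sequential."""
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / num_units)

    if __name__ == "__main__":
        p = 0.99  # assumed: 99% of the workload parallelizes perfectly
        for n in (1, 10, 100, 1_000, 1_000_000):
            print(f"{n:>9} units -> speedup {amdahl_speedup(n, p):8.1f}")
        # Even with p = 0.99 the speedup saturates near 1/(1-p) = 100,
        # no matter how many sequentially working units are assembled.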