The talk by Oliver Guhr was very well attended, with 43 participants. Thank you again to Oliver for delivering such a wonderful description of Transformers and a step-by-step discussion of the attention mechanism.
To capture the community and learn about their needs, we conducted two polls with all logged-in users. Each poll asked participants a question for which they were meant to pick one single answer.
What is the primary data type that you use ML for?
| Share | Answer |
|---|---|
| 10% | 3D volumes (voxel-based data) |
| 4% | 3D volumes over time |
It is interesting but perhaps unsurprising to see that most people appear to use machine learning (ML) for images (40%). An astonishing 25% could not find a description of their dataset among the allowed answers.
How much hardware do you use for (training) machine learning?
| Share | Answer |
|---|---|
| 7% | one computer (workstation, laptop, server) |
| 31% | one computer with a GPU inside |
| 13% | one computer with multiple GPUs inside |
| 4% | multiple computers (without GPU usage) |
| 19% | multiple computers with one GPU each |
| 31% | multiple computers with multiple GPUs each |
Here again, the results hold something unsurprising: 50% of our attendees use some kind of HPC or cloud infrastructure, as they use multiple computers at the same time with at least one GPU each. It is interesting to see that an almost comparable ratio of attendees do not: 44% answered that they use one computer with at least one GPU inside.
To make the seminar more interactive for everyone, we set up interactive notes. We used them to connect to each other, submit questions, and share further material for the talk. These notes, with additional links and resources, are available for download as a Markdown-formatted text file.
Slides are available as a PDF file here.