Research
Advanced computational intelligence is revolutionizing our daily lives. But how do these systems make sense of a complex world? How do they generate realistic videos, form abstract analogies, and understand human language? I'm unboxing their brain, looking for the key mechanisms that allow machine intelligence to learn simplicities (simple, generalizable regularities) within complexity.
- Keywords: Network Science, Representation Learning
Recent work
Network community detection via neural embeddings
Our paper on the detectability limit of neural embeddings is finally out from @NatureComms! We showed that a simple shallow neural net w/o non-linear activation can achieve the optimal community detectability limit. Let's dive in! @santo_fortunato @filrad https://t.co/DJM1NgDEfl
— Sadamori Kojaku (@skojaku) November 8, 2024
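As a rough illustration of the general idea (not the paper's method), the sketch below embeds a planted-partition graph with a purely linear, activation-free spectral embedding and clusters it with K-means. The graph sizes, edge probabilities, and the use of networkx/NumPy/scikit-learn are assumptions made only for this toy example.

```python
# Hedged sketch: community detection by clustering a shallow linear node embedding.
# NOT the paper's exact pipeline; it only illustrates that a linear (no-activation)
# embedding of a graph, followed by K-means, can recover planted communities.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

# Planted-partition (stochastic block model) graph: two communities of 250 nodes (assumed sizes).
sizes = [250, 250]
probs = [[0.08, 0.02], [0.02, 0.08]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
labels_true = [G.nodes[v]["block"] for v in G.nodes]

# Linear embedding: top eigenvectors of the symmetrically normalized adjacency matrix,
# the kind of factorization that shallow linear embedding models approximate.
A = nx.to_numpy_array(G)
deg = np.maximum(A.sum(axis=1), 1)            # guard against isolated nodes
M = A / np.sqrt(np.outer(deg, deg))           # symmetric normalization
vals, vecs = np.linalg.eigh(M)
emb = vecs[:, -2:]                            # 2-dimensional embedding

# Cluster the embedding and compare with the planted communities.
labels_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print("NMI:", normalized_mutual_info_score(labels_true, labels_pred))
```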
Implicit degree bias in the link prediction task
🚨Paper Alert🚨 Benchmarks guide #MachineLearning, but is the core benchmark for #GraphML, the link prediction task, guiding us correctly? With @RachithAiyappa @VisonWang1 @ozgurcanseckin @snetsMJ @JisungYoon8 and YY Ahn, we question its validity. Dive in! https://t.co/g5jQRC1yHY
— Sadamori Kojaku (@skojaku) May 28, 2024
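To make the benchmark concern concrete, here is a hedged toy sketch of the standard link-prediction setup: hold out some edges as positives, sample uniformly random non-edges as negatives, and compare endpoint degrees. The graph model, split ratio, and sampling scheme are illustrative assumptions, not the paper's experimental protocol; the sketch only shows why held-out edges are skewed toward high-degree nodes, which is where a degree bias can creep into evaluation.

```python
# Hedged sketch of the standard link-prediction benchmark on a heavy-tailed graph.
# Positive test edges necessarily touch higher-degree nodes than uniformly sampled
# negative pairs, so degree alone already separates the two classes.
import random
import networkx as nx
import numpy as np

random.seed(0)
G = nx.barabasi_albert_graph(2000, 5, seed=0)   # assumed toy graph with a heavy-tailed degree distribution

# Hold out 10% of edges as positive test examples (assumed split ratio).
edges = list(G.edges())
random.shuffle(edges)
n_test = len(edges) // 10
pos = edges[:n_test]
G_train = G.copy()
G_train.remove_edges_from(pos)

# Sample the same number of uniformly random non-edges as negatives.
nodes = list(G)
existing = set(map(frozenset, G.edges()))
neg = []
while len(neg) < n_test:
    u, v = random.sample(nodes, 2)
    if frozenset((u, v)) not in existing:
        neg.append((u, v))

# Compare endpoint degrees: positives are biased toward high-degree nodes.
deg = dict(G_train.degree())
mean_pos = np.mean([deg[u] + deg[v] for u, v in pos])
mean_neg = np.mean([deg[u] + deg[v] for u, v in neg])
print(f"mean endpoint degree  positives: {mean_pos:.1f}  negatives: {mean_neg:.1f}")
```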