Matches in Nanopublications for { ?s <http://www.w3.org/2000/01/rdf-schema#comment> ?o ?g. }
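The header above describes the quad pattern being matched. As a minimal sketch (with hypothetical example quads, not real nanopublication data), the same filter can be expressed over an in-memory list of (subject, predicate, object, graph) tuples:

```python
# A minimal sketch (hypothetical data, not real nanopublications) of the
# quad pattern above: every quad whose predicate is rdfs:comment matches,
# binding ?s, ?o and ?g to subject, object and named graph.

RDFS_COMMENT = "http://www.w3.org/2000/01/rdf-schema#comment"

# Quads as (subject, predicate, object, graph) tuples; the URIs below
# are made-up placeholders.
quads = [
    ("http://example.org/np1#sub", RDFS_COMMENT,
     "This should be nanopublished", "http://example.org/np1#assertion"),
    ("http://example.org/np2#sub", "http://example.org/otherPredicate",
     "unrelated statement", "http://example.org/np2#assertion"),
]

def match_comments(quads):
    """Return (?s, ?o, ?g) bindings for { ?s rdfs:comment ?o ?g . }."""
    return [(s, o, g) for (s, p, o, g) in quads if p == RDFS_COMMENT]

for s, o, g in match_comments(quads):
    print(f"{g}: {o}")
```

A real query would of course run against a SPARQL endpoint; this stand-alone version just makes the matching semantics concrete.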
- assertion comment " Enjoying this book a lot https://books.google.com/books?id=KpFKwgEACAAJ&newbks=0&hl=en " assertion.
- assertion comment " This should be nanopublished https://dl.acm.org/doi/10.1145/3511095.3536361 " assertion.
- assertion comment " will definitely be checking this paper out https://twitter.com/plevy/status/1831377298714747273 " assertion.
- assertion comment " What can be done about this, besides joining a local debate club? disagreeability, advocacy and argumentation are hard skills to develop. source paper: https://online.ucpress.edu/collabra/article/10/1/121937/202992/Changes-in-Need-for-Uniqueness-From-2000-Until https://twitter.com/BrandonWarmke/status/1823753977365999964 " assertion.
- assertion comment " I recently switched to Mandarin Blueprints as my primary method of learning Mandarin. So far I'm blown away at the course structure and the approach they take to teaching Mandarin (and teaching in general, tbh). https://www.mandarinblueprint.com/ They focus on three main components to effective fluency development: - comprehensible input - spaced repetition flashcards - elaborate mnemonic memory palace techniques I've never come across any kind of educational program that so effectively integrates these powerful learning techniques into the core of the curriculum. This type of course structure goes far beyond language learning and I hope we see this kind of approach adopted more broadly across all sorts of learning environments (including higher ed). It's sad thinking about how inefficient and ineffective so many of our school systems are at promoting and guiding deep knowledge acquisition for students. So much lost intellectual potential of entire populations. " assertion.
- assertion comment " Subtle but very important observation. Not to overlabor the metaphor, but we're like the frog trying to detect whether it's being boiled with the twist that at the same time our brain is also being boiled. Reminds me of some intriguing related research on "machine epistemology" https://twitter.com/rtk254/status/1831664576754045156/photo/1 https://twitter.com/erikphoel/status/1831351806099664973 source https://journals.sagepub.com/doi/10.1177/20539517231188725 " assertion.
- assertion comment " This is a good recommendation https://twitter.com/sense_nets/status/1831307092575031539 " assertion.
- assertion comment " Human feedback is critical for aligning LLMs, so why don't we collect it in the open ecosystem? We (15 orgs) gathered the key issues and next steps. Envisioning a community-driven feedback platform, like Wikipedia https://www.alphaxiv.org/abs/2408.16961 https://twitter.com/LChoshen/status/1831708316982231235/photo/1 We define 5 axes of openness: Methodology (how it's collected) Access (who can use it) Models (one\many) Contributors (as diverse as its uses?) Time (keeps updating? closed models improve over several feedback iterations, and of course, models change) Is current feedback open? https://twitter.com/LChoshen/status/1831708319855362549/photo/1 In our paper, we first learn from peer production efforts like Wikipedia and Stack Overflow. These case studies tell us how important it is to align incentives of different bodies, allow the community to dictate the policies, etc. Then, we hone in on 6 crucial areas to develop open human feedback ecosystems: incentives to contribute, reducing contribution efforts, getting expert and diverse feedback, ongoing dynamic feedback, privacy and legal issues. 
https://twitter.com/LChoshen/status/1831708324284522965/photo/1 We believe a successful ecosystem must center around feedback loops where anyone can spin up a community model, for storytelling, Bengali or anything else. Others can use it, give feedback, and benefit from a model that keeps improving with the contributions https://twitter.com/LChoshen/status/1831708326717161683/photo/1 The feedback from all models will be open and collected in one pool, helping not only the specialized models created but also future research and general improvement This was a huge effort and the paper is packed with ideas thanks to: #deepRead @Shachar_Don @ben_burtenshaw @RamonAstudill12 @cailean_osborne @MimansaJ @tzushengkuo @wzhao_nlp @IdanShenfeld @TheAndiPenguin @Yurochkin_M @Dr_Atoosa @YangsiboHuang @tatsu_hashimoto @YJernite @dvilasuero @AbendOmri @jen_gineered @sarahookr @hannahrosekirk Note, we don't only preach open: this was open, with contributions from so many organizations @CohereForAI @MITIBMLab @IBMResearch @huggingface @nlphuji @UniofOxford @MIT_CSAIL @StanfordHAI @turinginst @princeton_nlp @cmuhcii @EdinburghUni @cornell_tech Please ask us anything, share, discuss and talk to us, we are going to make it real! Together! Much much more in the paper: https://www.alphaxiv.org/abs/2408.16961 " assertion.
- assertion comment " An interesting finding on the connection between directions and multiple fine-tunings of different tasks (sequentially) https://twitter.com/danie1marczak/status/1831277841196912958 " assertion.
- assertion comment " I've been waiting for it for so long https://twitter.com/KaiserWhoLearns/status/1823800506504081884 " assertion.
- assertion comment " In our new preprint "Properties of Effective Information Anonymity Regulations", we formalize core principles for anonymization & prove that common interpretations of GDPR anonymization fail to protect information privacy reliably: < https://arxiv.org/abs/2408.14740?utm_source=dlvr.it&utm_medium=twitter > " assertion.
- assertion comment " Our new preprint with @philipncohen -- "The State of Sociology: Evidence from Dissertation Abstracts" https://osf.io/preprints/socarxiv/a8uyp?utm_source=dlvr.it&utm_medium=twitter . Philip summarizes here: https://mastodon.social/@philipncohen/112962549677291692?utm_source=dlvr.it&utm_medium=twitter " assertion.
- assertion comment " Many thanks to the ACM publishing folks and tech policy committee for supporting our new policy brief: https://twitter.com/acmpolicy/status/1814271955077333165 " assertion.
- assertion comment " Useful: "Statistical Challenges in Online Controlled Experiments: A Review of A/B Testing Methodology" https://www.tandfonline.com/doi/full/10.1080/00031305.2023.2257237 Not just for "online". Great summary of methods that can be used to reliably inform business/organizational practice. " assertion.
- assertion comment " Peeling back the layers of our #energyblindness. We love to talk about productivity gains of automation, but we rarely talk about the energy efficiency losses of replacing human labour. More about replacing human labour with fossil energy and implications: https://read.realityblind.world/view/975731937/192/#zoom=true https://twitter.com/wesleyfinck/status/1806061359773200699/photo/1 https://x.com/wesleyfinck/status/1657956908571979776 " assertion.
- assertion comment " We've been hard at work trying to revitalize social media for scientists. Let us know if you are interested and what kind of features you want + share with your network, thx! https://twitter.com/rtk254/status/1803100275990794566 " assertion.
- assertion comment " Active inference models of science https://arxiv.org/abs/2409.00102 @InferenceActive " assertion.
- assertion comment " https://x.com/_angie_chen/status/1796220428345573399 https://x.com/LChoshen/status/1816819564002304091 " assertion.
- assertion comment " TIL - "Sympoiesis" Like autopoiesis, but symbiotic organizing instead of self-organizing https://twitter.com/rtk254/status/1831874637552312530/photo/1 Source: https://www.dukeupress.edu/staying-with-the-trouble " assertion.
- assertion comment " I'm quite interested in knowledge graphs x LLMs but these guys are miles ahead thinking about moral graphs x LLMs. Super interesting work on aligning AI https://www.meaningalignment.org/research/new-paper-what-are-human-values-and-how-do-we-align-ai-to-them " assertion.
- assertion comment " Excellent critical review. This in particular tracks. I took a course with him, and while I don't remember anything from the course, I do remember being struck by his brazenness to ignore entire academic disciplines that were inconvenient for the stories he was trying to sell. https://twitter.com/rtk254/status/1832576205955923984/photo/1 https://twitter.com/daniel_dsj2110/status/1832026107945570404 " assertion.
- assertion comment "The content of this nanopublication looks good from a formal point of view and seems to match the description in the manuscript (under review). The ZooBank URI (https://zoobank.org/NomenclaturalActs/7ad8f87f-e7c1-4094-bd63-7662f167e9cb) doesn't resolve, which I suppose is because this entry hasn't been approved/published yet. So, this is likely not an issue (but I cannot check)." assertion.
- assertion comment "The taxon name should be referred to by its ZooBank URL (https://zoobank.org/NomenclaturalActs/7ad8f87f-e7c1-4094-bd63-7662f167e9cb) as in the other nanopublication (this one: https://w3id.org/np/RANfU_7tS66XyfZS4RauqKLLnSEcfa1L06ueqKGLMU9TA), and not as a locally minted identifier. Otherwise, this nanopublication looks good on the formal side, and seems to match the content of the manuscript (under review) as much as I can tell as a non-expert of the field." assertion.
- assertion comment "It seems that the statement that is meant here is to link the *taxon* to the given habitat, and not a specific *organism* of that taxon. In that case the other template "Association between taxa and environments" should be used. But maybe I am wrong and the authors really want to talk about a specific organism of that taxon they found in the given habitat. In this case, the nanopublication is alright. The rest looks good." assertion.
- assertion comment " HELM evaluation (@StanfordHAI @stanfordnlp @percyliang ) announces the integration of Unitxt For more datasets, easier integration of new datasets, sharable and reproducible pipelines and more Kudos @ElronBandel & @YifanMai https://twitter.com/LChoshen/status/1833134684752204287/photo/1 The blogpost: https://crfm.stanford.edu/2024/09/05/unitxt.html More on Unitxt https://github.com/IBM/unitxt More on HELM https://crfm.stanford.edu/helm/ " assertion.
- assertion comment " One thing I didn't really think about before this, when I talk to a model and it generates an image, the image is mine. Why isn't it the case when it is text? Well, isn't it? t\h @RamonAstudill12 @YangsiboHuang others https://twitter.com/LChoshen/status/1831708316982231235 " assertion.
- assertion comment " Very important work! And very brave testimonies. It really shows the difference between wisdom and intelligence, individual and collective. And how society can self destruct, anything but to accept reality.. https://adultsintheroom.libolibo.me/ " assertion.
- assertion comment " This sparks thoughts for another application of LLMs in science. Recover the experimental materials from a description of them in the paper by asking an LLM to code the software program. Would be particularly useful in psychology. #metascience #research https://twitter.com/emollick/status/1805342038612918335 " assertion.
- assertion comment " Study/exp design tools are on my mind. They seem valuable for preventing errors, but their scope seems narrow. What if... Planning tools recommended software, hardware, project management? We extended them to planning qual and multi/mixed methods studies? Pic: @NC3RS EDA https://twitter.com/metasdl/status/1800680071067574623/photo/1 " assertion.
- assertion comment " I have a challenge for you #metascience and #openscience : Where can I go online to find examples of great papers (clear, rigorous, theorized soundly, etc.)? I'm looking for the opposite of RetractionWatch. " assertion.
- assertion comment " Sharing a blog post, I wrote about where AI tech should allocate funds if they are earnest about "aligning AI tech with Humanity" and my attempted experience applying to one of their grants. #OpenAI #DataCommons #Governance https://medium.com/@shahar.r.oriel/pathways-to-ai-governance-an-alternative-grant-and-proposal-a2768e75d13 " assertion.
- assertion comment " AGI is right around the corner https://twitter.com/rtk254/status/1833917330189156749/photo/1 https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.COSIT.2024.28 " assertion.
- assertion comment " You only learn a few parameters, with your parameter "efficient" finetuning. The rest is wasted. A whole line of works shows that by throwing away redundancy we can get better LoRAs, keep less memory and of course model merge https://twitter.com/LChoshen/status/1833879920348422216/photo/1 ComPeft shows you can improve LoRAs by pruning aggressively and making the remaining weights binary (+/-) It also means parameter efficiency still relies on overparametrization (but only during training) https://x.com/prateeky2806/status/1727589818618523783 Laser shows it on full models https://x.com/pratyusha_PS/status/1739025292805468212 https://twitter.com/LChoshen/status/1833879922500080084/photo/1 In merging, many find that with only those few weights one can make a "multitask" model, keeping the important ones for each model and switching between them. Those few weights, e.g. 1% of them, also represent tasks well. Many: https://www.alphaxiv.org/abs/2408.13656 https://www.alphaxiv.org/pdf/2405.07813 https://www.alphaxiv.org/pdf/2310.01886 Those works are focused on efficient multitask learning that compresses the models, can keep many models and switch between them as necessary. Another option to compress is to SVD the LoRA, separately or to a shared space, saving the tiny differences https://x.com/RickardGabriels/status/1810368375455207470 And just because we discussed compression, of course this is all just "model compression"; if you want to compress just to save space, there are smarter ways: https://github.com/zipnn/zipnn " assertion.
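The prune-and-binarize idea described in the ComPeft tweet can be sketched in a few lines (illustrative only; the `keep_ratio` parameter and the choice of gamma as the mean magnitude of the kept weights are assumptions, not the paper's exact recipe):

```python
# Rough sketch of aggressive pruning + sign binarization of a weight
# vector, as described in the thread (not the actual ComPeft code).

def prune_and_binarize(weights, keep_ratio=0.1):
    """Keep only the top-|keep_ratio| weights by magnitude; replace each
    survivor with sign(w) * gamma, where gamma (an assumed choice here)
    is the mean magnitude of the kept weights. Everything else -> 0."""
    k = max(1, int(len(weights) * keep_ratio))
    kept_idx = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k]
    gamma = sum(abs(weights[i]) for i in kept_idx) / k
    out = [0.0] * len(weights)
    for i in kept_idx:
        out[i] = gamma if weights[i] > 0 else -gamma
    return out

w = [0.03, -0.9, 0.05, 0.7, -0.01, 0.02, 0.6, -0.04, 0.01, 0.08]
print(prune_and_binarize(w, keep_ratio=0.3))
```

The result stores only a sign pattern plus one scalar per adapter, which is what makes the compressed LoRAs cheap to keep around and switch between.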
- assertion comment " AI alignment debates would make a lot more sense if they were debates about aligning corporations https://twitter.com/rtk254/status/1832769861728030803 " assertion.
- assertion comment " Striking that this proposal is converging to the hybrid architecture of the only instance of a generally intelligent system we have so far (biological brains): System 1 + System 2 (broadly construed). https://twitter.com/kasratweets/status/1806486047750009336 Makes sense if we're trying to build new instances of intelligence that are adaptive in a similar fitness landscape. Wonder what we'd converge to in other environments? " assertion.
- assertion comment " Not every day you get to help science... Know of (or have released) such a dataset? (BTW not my research, I am helping them as well) https://twitter.com/NitCal/status/1833917233564880969 Possibly interested people @yufanghou @AndreasWaldis @neuranna " assertion.
- assertion comment " One thing I've learnt from @RamonAstudill12: model releases RLHF multiple times! We can't even train once on open data... https://twitter.com/LChoshen/status/1831708316982231235 " assertion.
- assertion comment " @Julius_Steuer @mariusmosbach @dklakow Response: The long term attention makes that https://x.com/devarda_a/status/1824065227916230848?t=JI_rNp3_ZXRLFFu0WJ5R2g&s=19 " assertion.
- assertion comment " Is there a good platform to engage in discussion about a paper in a group? I had in mind something like PDF annotations, but where you can read the PDF without logging in and there are threads of discussions like Google Docs, but other formats are welcome == alternative to arXiv links @rtk254 sounds like you might know " assertion.
- assertion comment " I just had the most marvelous chat with @CohereForAI-R, Claude and Latex (next tweet) I asked +R to explain a latex error, and it failed. NP, happens Claude gave a partial direction, sending me back to latex In latex I found the error, but was too lazy to solve so ... 1/2 https://twitter.com/LChoshen/status/1817455710365495492/photo/1 I gave Claude the additional feedback and it Solved it! Then came the surprising moment In parallel, I also told R about my discussions with Claude, and it recognized the feedback perfectly! If only I could improve models that way... (data is collected by https://sharelm.github.io/) https://twitter.com/LChoshen/status/1817455713054077372/photo/1 So maybe natural feedback extraction would be able to use it one day? https://x.com/LChoshen/status/1813662203532263467 P.S. Think if we didn't only save chats and feedback and extracted, but UIs would have asked: I noticed you had a problem, was it solved eventually? How? Like stack overflow for our own (open, mind you) specialized models. I'd be happy to teach them if it was used to help others " assertion.
- assertion comment " #ICML2024 Weight decay regularizes weights to be low-rank + increases alignment between layers Good labels end with low rank middle layer weights This doesn't happen with random labels https://openreview.net/forum?id=u3sssLLu4y&referrer=%5Bthe%20profile%20of%20Samuel%20Wheeler%5D(%2Fprofile%3Fid%3D~Samuel_Wheeler1) @kkpatelnmh @PedroSavarese @iammattwalters https://twitter.com/LChoshen/status/1816819564002304091/photo/1 " assertion.
- assertion comment " #700 after lunch https://twitter.com/LChoshen/status/1815631742323093885 " assertion.
- assertion comment " #2616 after lunch https://twitter.com/LChoshen/status/1815631744407654878 " assertion.
- assertion comment " Postdoc on the mathy side of LLM (mech.) interpretability and model understanding to join us? https://twitter.com/JustinMSolomon/status/1814793639945494800 People that may know people @amuuueller @megamor2 @lena_voita @boknilev @nsaphra " assertion.
- assertion comment " #icml2024 paper: how are LLMs used in reviews? 10% of ICLR sentences are auto-generated. More LLM usage when submitting later Less when referring to at least one other paper https://arxiv.org/abs/2403.07183 @Stanford and @nec Many authors: https://twitter.com/LChoshen/status/1816041318344249749/photo/1 As Stanford likes, authors take a full tweet :-) @liang_weixin @yaohuiz3 @hay_lepp @CaoHancheng @xuandongzhao @ChenLingjiao @haotian_yeee @ShengLiu_ @EyubogluSabri " assertion.
- assertion comment " best #icml2024 position: 103 datasets that claim to be more diverse, are not. Diversity claims are subjective, political and not tested; instead of claiming, let's measure. But how? @dorazhao9 @SciOrestis @alicexiang https://arxiv.org/abs/2407.08188 https://twitter.com/LChoshen/status/1816031646568583532/photo/1 Basically, like we evaluate everything else. Measure one thing at a time (don't also test a new model) Have a specific claim (is it language diverse, background, origin) and quantify it Separate it from other constructs like how much data was collected or whether it is biased https://twitter.com/LChoshen/status/1816031649416556577/photo/1 " assertion.
- assertion comment " Do I need to introduce you to KTO? One view of it is that you don't need pairs for RLHF #icml2024 https://twitter.com/winniethexu/status/1815297532953555031 " assertion.
- assertion comment " .@soumithchintala in his opening remarks: Close modelling vs opening is just your assumption on how far AGI is We should care more how others see us and we should fill the gaps in the open models ecosystem. Mainly open human feedback For which he states a main missing component We need a "sink" to pool all the feedback into one place without it costing anything to contribute. I agreed, until we created https://huggingface.co/datasets/shachardon/ShareLM Use it, improve it or build on it; it takes no money for hosting He adds another problem: coordinating, UI, feedback, sink hosting. No worries, we are on it; if you are interested in building such a thing or have thoughts, comment or DM If you are too lazy, maybe just share the feedback you already give for the open? https://sharelm.github.io/ Btw no complaints for Soumith of course, he's great and until someone tries. You need to be relevant to be even eligible for disagreement (as cancel culture and extremism sadly teach us) " assertion.
- assertion comment " Feedback is so natural, we already give it during a third of the chats, and now we can use it (170K human feedback) https://twitter.com/Shachar_Don/status/1813578072593150137 " assertion.
- assertion comment " Reading papers before they are trendy, Sharing knowledge even if not a self advertisement (Sharing a work I was excited about) Let's all be Kawin https://twitter.com/ethayarajh/status/1813292645340573839 " assertion.
- assertion comment " Thoughts in psycholinguistics after the BabyLM challenge https://twitter.com/weGotlieb/status/1813506588155723807 " assertion.
- assertion comment " Stealthily and steadily Unitxt grows https://twitter.com/seirasto/status/1813265905109070074 " assertion.
- assertion comment " A thread of unusual NLP for social good (IMO, add your own) To help agriculture where technology and information reach less. Yes robots are cool, but 783M people in the world experience hunger, Google says https://x.com/LChoshen/status/1810675837332869409 A chatbot for asylum-seeking migrants in Europe to "help migrants identify the highest level of protection they can apply for" https://arxiv.org/pdf/2407.09197 " assertion.
- assertion comment " AC POV Filling holes (emergency reviews) is so stressful Reviewer accidents are rarely last minute, but they understandably don't run to update me, right? AC has no way of knowing, except if missing a sign of life during the review period *ACL checklist is that, so helpful! So, yes, those checklists are a bureaucratic time waste (PCs supposedly desk reject given those? How often?) But, for ACs this (or any check-this-box-if-everything-is-fine) is the best thing that could happen. What do other conferences have? What are your thoughts? " assertion.
- assertion comment " "Vision LMs fail on 7 absurdly easy visual tasks identifying whether two circles overlap; two lines intersect; which letter is being circled in a word; counting the circles in an Olympic-like logo..." Do I need to explain any further? @Pooyanrg @anh_ng8 https://arxiv.org/abs/2407.06581 " assertion.
- assertion comment " Pretraining data mixture is the secret sauce, so the open not-open Llama models tell us. This beats DoReMi x10 Research pretraining, it's impactful and rare https://twitter.com/sivil_taram/status/1810697629074067640 " assertion.
- assertion comment " A great place for #benderRule ... And this is all in English: https://twitter.com/FeiziSoheil/status/1810706469626535975 https://x.com/MLMazda/status/1808508877983617181?t=XRLNjnXTvUp1IuvHIF3AUA&s=19 " assertion.
- assertion comment " I often wonder if what I do helps the world, enough? I remember @harari_yuval & @dataspade describing agriculture as a crucial field for social good. This group bridges agricultural information gaps, with NLP! https://arxiv.org/abs/2407.04721 @pratinavseth @adi_kasliwal @labnol https://twitter.com/LChoshen/status/1810675837332869409/photo/1 " assertion.
- assertion comment " LoRAs have a lot in common. So one can compress (+-SVD with unique s) them together, serve them efficiently or understand their shared spaces https://twitter.com/RickardGabriels/status/1810368300045709398 " assertion.
- assertion comment " Want to study different LoRAs? Merging? Task dependence? https://twitter.com/RickardGabriels/status/1810368226154684598 " assertion.
- assertion comment " How does dataset size affect model weights? Mainly norm growth but also eigenvalues Important resource: 2k image LoRAs (for text LoRAs there's https://huggingface.co/Lots-of-LoRAs ) https://twitter.com/MohammadSalaama/status/1806619254659182894 " assertion.
- assertion comment " To reduce evaluation contamination @XuanmingZhang07 @Zhou_Yu_AI @columbianlp et al. convert dataset examples into templates (Fig.) https://arxiv.org/abs/2406.17681 EWOK datasets are built to have this trait https://x.com/neuranna/status/1791465842632454184 Interesting trend. Will it last? Solve contamination? https://twitter.com/LChoshen/status/1806396147281637645/photo/1 @XuanmingZhang07 @Zhou_Yu_AI @columbianlp If you ask me, a nice step, but it only solves the worst contamination (clear training on the test set). Not just training on similar formats, synthetic data etc. to improve. So it is a good approach that should last, but we need more. (@deliprao you had a similar claim right?) " assertion.
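The templating idea can be made concrete with a toy example (the template and slot names here are hypothetical, not the paper's format): instead of fixing a test item whose exact text can leak into training data, store a template and instantiate fresh surface forms at evaluation time.

```python
# Toy sketch of template-based evaluation items (hypothetical template,
# not the format used in the cited paper).
import random

# Slots {name}, {n}, {k} are filled freshly at evaluation time.
TEMPLATE = "{name} bought {n} apples and ate {k} of them. How many are left?"

def instantiate(template, rng):
    """Draw slot values and return (question_text, gold_answer)."""
    name = rng.choice(["Ada", "Bo", "Cai"])
    n = rng.randint(5, 9)
    k = rng.randint(1, 4)
    return template.format(name=name, n=n, k=k), n - k

rng = random.Random(0)
question, gold = instantiate(TEMPLATE, rng)
print(question, "->", gold)
```

Because the concrete strings differ on every instantiation, memorizing one test set gives no direct advantage; as the comment notes, though, this does not guard against training on similar formats or synthetic look-alikes.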
- assertion comment " LLM representations align with brain fMRIs, but not always to the same extent. When do they match: https://twitter.com/bkhmsi/status/1805595993284415913 " assertion.
- assertion comment " Makes one think https://twitter.com/OwainEvans_UK/status/1804182787492319437 " assertion.
- assertion comment " Evolver, model merging in a genetic algorithm Improves on current merging techniques (my beloved TIES) Train diverse models Merge regularly or take diff between two models Update some parameters Keep if good Repeat https://arxiv.org/abs/2406.12208 @jingli9111 @banting_liu @576gsk https://twitter.com/LChoshen/status/1803410440535326786/photo/1 Merging is aimed at taking many models and getting one that generalizes better; there are various methods for it, read more e.g. on TIES https://x.com/prateeky2806/status/1665759148380758022 Genetic algorithms evolve models, in steps: Create mutations (here new m = m_old + a(m_1-m_2)) m are models, a some constant Crossover, take some of the mutation and apply it, for each parameter randomly keep m_old or update to m_new Survive, keep only the best performing on val By sometimes merging and sometimes evolving (and dev sets) they improve over all current methods https://twitter.com/LChoshen/status/1803410445635653960/photo/1 In some sense, this can be seen as a better search in the region between the merged models, which we know is not equally good but all better than the edges https://x.com/LChoshen/status/1729488495515713672 https://twitter.com/LChoshen/status/1803410447246250483/photo/1 " assertion.
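The mutate/crossover/survive loop described in that comment can be sketched as a toy (models as flat weight lists, and the fitness function is a stand-in for dev-set performance, not the paper's actual setup):

```python
# Toy sketch of the evolutionary loop: mutate (m_old + a*(m1 - m2)),
# crossover per parameter, keep the best performers. Illustrative only.
import random

def mutate(m_old, m1, m2, a=0.5, crossover_p=0.5, rng=None):
    """Per parameter, either keep the old value or move it toward
    m_old + a * (m1 - m2), chosen at random (crossover)."""
    rng = rng or random.Random()
    return [
        w_old + a * (w1 - w2) if rng.random() < crossover_p else w_old
        for w_old, w1, w2 in zip(m_old, m1, m2)
    ]

def fitness(m, target):
    # Stand-in for validation performance: negative distance to a target.
    return -sum((w - t) ** 2 for w, t in zip(m, target))

rng = random.Random(0)
target = [1.0, -1.0, 0.5]  # assumed "ideal" weights for the toy
population = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(6)]

for _ in range(50):
    m_old, m1, m2 = rng.sample(population, 3)
    child = mutate(m_old, m1, m2, rng=rng)
    # Survive: append the child, then keep only the 6 best on "val".
    population.append(child)
    population.sort(key=lambda m: fitness(m, target), reverse=True)
    population = population[:6]

best = population[0]
print("best fitness:", fitness(best, target))
```

The search stays in the region spanned by the existing models (each child is a blend of three parents), which matches the comment's reading of merging-as-search between models.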
- assertion comment " ABOLISH THE VALUE FUNCTION https://twitter.com/micahgallen/status/1832019686101291361 " assertion.
- assertion comment " Sincerely expecting to either read or derive myself over the next several years that fold-change detection with a normalizing feedback input amounts to prediction-error (grad-of-log-prob) signaling. https://twitter.com/drmichaellevin/status/1832449357095829681 "Grad of log of SOMETHING" has already been shown in previous work, hence my expectation that it's not gonna take long to show how normalization can get done. " assertion.
- assertion comment " https://twitter.com/du_yilun/status/1757072068133220728 " assertion.
- assertion comment " So is this actually experimental evidence that inter-areal communication uses an unweighted, linear read-out of the spike trains on the incoming synapses? https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007692 https://twitter.com/ShahabBakht/status/1827165318885572819 " assertion.
- assertion comment " TL;DR: https://twitter.com/EliSennesh/status/1823552658319552917/photo/1 https://twitter.com/EliSennesh/status/1823164927248318565 " assertion.
- assertion comment "Serialized representation size benchmark for grouped streams, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Grouped streaming deserialization (parsing) throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Grouped streaming deserialization (parsing) throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Grouped streaming deserialization (parsing) throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Grouped streaming serialization throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Grouped streaming serialization throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Grouped streaming serialization throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Flat streaming serialization throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Flat streaming serialization throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Flat streaming serialization throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Flat streaming deserialization (parsing) throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Flat streaming deserialization (parsing) throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment "Flat streaming deserialization (parsing) throughput benchmark, comparing Jelly to W3C serializations implemented in Apache Jena, as well as Jena's own binary formats. The benchmark was run on a modern x86-64 workstation." assertion.
- assertion comment " Wrote a short and light blog post about how today's AI models, specifically LLMs, align with the popular AI narrative created by last century's sci-fi literature. https://medium.com/@shahar.r.oriel/a-plot-twist-in-the-ai-narrative-757dbc141543 #ArtificialIntelligence #LLMs #GPT4 #scifi " assertion.
- assertion comment " Recording of @UnfoldResearch lightning talk from @aimos_inc 2022 conference of the Association for Interdisciplinary Metaresearch & Open Science https://www.youtube.com/watch?v=gjFsx7zZU2Y&list=PLx5ctDpyYVLFT2jDVQDp7v1TURdcrfgB8&index=5&t=1103s " assertion.
- assertion comment " Unfold Research browser extension is now available for download! Go to https://unfoldresearch.com/ and start using it today, completely free! Here's how it can help you do your research better and discover new things Whether you're on a web page or reading a PDF of a paper, Unfold can be used to access and share any kind of research - reviews, datasets, Q&A, notes, videos,... https://twitter.com/UnfoldResearch/status/1560995114536910850/photo/1 Simply go to the page where your paper or preprint is located, and link those extra materials as micropublications. Now, anyone who visits that page will be able to find them and access them! It's right there, a single glimpse away! https://twitter.com/UnfoldResearch/status/1560995150331002883/video/1 All the things that usually don't end up in the paper, such as intermediate datasets, interactive visualizations and demos, author notes, videos, and slides... now have a home, and can be permanently linked directly to your paper and easily discovered by others. Have you ever wondered if there is some Twitter thread about a certain paper that summarises the paper in just a few nice, short sentences? Or if there is a Github repo with software implementation of the methods from the paper? Now it's so easy to find out! Or to share your own! https://twitter.com/UnfoldResearch/status/1560995180429320192/video/1 You can also share all the notes, slides and videos from a conference, in a single place that everybody knows about - right on the web pages of the conference itself! You don't have to wander and frantically gather all the materials anymore. https://twitter.com/UnfoldResearch/status/1560995220849922048/video/1 You have the full power of Markdown and KaTeX to format your text and communicate things clearly. https://twitter.com/UnfoldResearch/status/1560995243528421376/video/1 You can upload files and manage them within folders. 
This is an excellent way to share datasets or image and video files, and connect them directly to the paper, in full detail and bigger resolution. https://twitter.com/UnfoldResearch/status/1560995288529215489/video/1 Stay up-to-date on new research by following micropubs and you'll get an in-app notification for any new reply that's submitted. You can also follow other people's accounts and see what they are posting! https://twitter.com/UnfoldResearch/status/1560995294862622728/photo/1 All publications can be upvoted or downvoted - this will help your peers know what deserves more of their attention and could be useful. Upvotes that your contributions collect over time also help them rank better and be more prominently displayed. https://twitter.com/UnfoldResearch/status/1560995330249920513/video/1 For sharing your experience and opinions in more detail, you can leave reviews and link to your experiments and results. These conversations can continue as others are also able to reply, ask additional questions and review your review etc. Science is a collective effort! https://twitter.com/UnfoldResearch/status/1560995366492848129/video/1 Important things can be saved to your Library, so that you can return to them later. You can also bookmark interesting profiles! https://twitter.com/UnfoldResearch/status/1560995390006206464/video/1 We hope that these new tools will help you overcome some of the obstacles that are present when we all try to put Open Science into practice. These new capabilities provide a way to contribute and discover related things anywhere and at any time, keep research close together and linked, always within the same community of people so you don't have to reset your progress every time you start with new interests. 
https://twitter.com/UnfoldResearch/status/1560995397220372480/photo/1 You can now overcome limitations regarding collaboration between different paper servers, different communities for different research disciplines, and find content that search engines cannot easily point you to. You are free to choose where the content is coming from, for we recognize that many other platforms provide amazing solutions to the problems that they are solving specifically, and we want to bring all of that work closely together and make it easier to discover. We aim to provide a tool that you can use without even thinking about it, and whenever there is something that might be useful, we'll let you know - your own research companion! We couldn't be more excited about the things that we have planned, that will continue to enrich your experience while doing research and keep making it easier and easier, so, stay tuned! To get started, visit https://unfoldresearch.com/ and let us know what you think! Happy unfolding, scholars! " assertion.
- assertion comment " Unfold Research browser extension is now available for download! Go to https://unfoldresearch.com/ and start using it today, completely free! Here's how it can help you do your research better and discover new things π§΅ Whether you're on a web page or reading a PDF of a paper, Unfold can be used to access and share any kind of research - reviews, datasets, Q&A, notes, videos,... https://twitter.com/UnfoldResearch/status/1560995114536910850/photo/1 Simply go to the page where your paper or preprint is located, and link those extra materials as micropublications. Now, anyone who visits that page will be able to find them and access them! It's right there, a single glimpse away! π https://twitter.com/UnfoldResearch/status/1560995150331002883/video/1 π All the things that usually don't end up in the paper, such as intermediate datasets, interactive visualizations and demos, author notes, videos, and slides... now have a home, and can be permanently linked directly to your paper and easily discovered by others. Have you ever wondered if there is some Twitter thread about a certain paper that summarises the paper in just a few nice, short sentences? Or if there is a Github repo with software implementation of the methods from the paper? Now it's so easy to find out! Or to share your own! https://twitter.com/UnfoldResearch/status/1560995180429320192/video/1 π’ You can also share all the notes, slides and videos from a conference, in a single place that everybody knows about - right on the web pages of the conference itself! You don't have to wander and frantically gather all the materials anymore. https://twitter.com/UnfoldResearch/status/1560995220849922048/video/1 π You have the full power of Markdown and KaTeX to format your text and communicate things clearly. https://twitter.com/UnfoldResearch/status/1560995243528421376/video/1 π You can upload files and manage them within folders. 
This is an excellent way to share datasets or image and video files, and connect them directly to the paper, in full detail and bigger resolution. https://twitter.com/UnfoldResearch/status/1560995288529215489/video/1 π Stay up-to-date on new research by following micropubs and you'll get an in-app notification for any new reply that's submitted. You can also follow other people's accounts as well and see what are they posting! https://twitter.com/UnfoldResearch/status/1560995294862622728/photo/1 π/π All publications can be upvoted or downvoted - this will help your peers know what deserves more of their attention and could be useful. Upvotes that your contributions collect over time also help it rank better and be more prominently displayed. https://twitter.com/UnfoldResearch/status/1560995330249920513/video/1 π For sharing your experience and opinions in more detail, you can leave reviews; and link to your experiments and results. These conversations can continue as others are also able to reply, ask additional questions and review your review etc. Science is a collective effort! https://twitter.com/UnfoldResearch/status/1560995366492848129/video/1 βοΈ Important things can be saved to your Library, so that you can return to them later. You can also bookmark interesting profiles as well! https://twitter.com/UnfoldResearch/status/1560995390006206464/video/1 We hope that these new tools will help you overcome some of the obstacles that are present when we all try to put Open Science to practice. These new capabilities provide a way to contribute and discover related things anywhere and at any time, keep research close together and linked, always within the same community of people so you don't have to reset your progress every time you start with new interests. 
https://twitter.com/UnfoldResearch/status/1560995397220372480/photo/1 You can now overcome limitations regarding collaboration between different paper servers, different communities for different research disciplines, and find content that search engines cannot easily point you to. You are free to choose where the content is coming from, for we recognize that many other platforms provide amazing solutions to the problems that they are solving specifically, and we want to bring all of that work closely together and make it easier to discover. We aim to provide a tool that you can use without even thinking about it, and whenever there is something that might be useful, we'll let you know - your own research companion! We couldn't be more excited about the things that we have planned, that will continue to enrich your experience while doing research and keep making it easier and easier, so, stay tuned! π€ To get started, visit https://unfoldresearch.com/ and let us know what you think! Happy unfolding, scholars! π " assertion.
- assertion comment " Scaling laws don't care about scale of the "train" models? Did anyone else get this? When I predict a scaling law, the scale of the largest model matters, but the num-models for fitting matters much much much more. Initial results, scaling error by #models starting from largest https://twitter.com/LChoshen/status/1803401845626511568/photo/1 Maybe more simply put: You can predict a scaling law with 8 small models, and it would be better than 3 large ones (that costs a lot) Is that something anyone else seen? " assertion.
- assertion comment " Following this initiative for a time. Go open science #academic #academics #openscience #academicChatter https://twitter.com/rtk254/status/1803100275990794566 " assertion.
- assertion comment " Memorizing the exam questions, but with GPUs https://twitter.com/kzSlider/status/1834321011074105819 " assertion.
- assertion comment " Dear funders, You could spend "orders of magnitude larger" to do supervised, massively curated teaching of LLMs. Or you could spend a fraction of that so I can scale probabilistic programming with predictive coding for inference. Your choice! https://twitter.com/rm_rafailov/status/1834312017764913212 " assertion.
- assertion comment " Modern culture giving us βthe freedom to choose what is always the same.β Really enjoyed this discussion between @OshanJarow and Christian Arnsperger https://www.oshanjarow.com/podcasts/emancipatory-social-science-with-christian-arnsperger https://twitter.com/rtk254/status/1835327248024432655/photo/1 Also on being in the system while being outside of it: https://twitter.com/rtk254/status/1835327624287056025/photo/1 "The freedom to choose what's always the same" could also be said about our digital spaces. Lots of resonance with the theme of "rewilding the internet" https://x.com/robinberjon/status/1814321721673121973 " assertion.
- assertion comment " Modern culture giving us βthe freedom to choose what is always the same.β Really enjoyed this discussion between @OshanJarow and Christian Arnsperger https://www.oshanjarow.com/podcasts/emancipatory-social-science-with-christian-arnsperger https://twitter.com/rtk254/status/1835327248024432655/photo/1 Also on being in the system while being outside of it: https://twitter.com/rtk254/status/1835327624287056025/photo/1 "The freedom to choose what's always the same" could also be said about our digital spaces. Lots of resonance with the theme of "rewilding the internet" https://x.com/robinberjon/status/1814321721673121973 " assertion.
- assertion comment " In my conversation with David Rothkopf (@djrothkopf) we discuss some challenges of AI and privacy -- starting with some key points from our recent ACM tech policy @USTPC brief. Many thanks to David/ DSR Network for hosting: https://thedsrnetwork.com/ai-and-the-challenge-of-data-privacy?utm_source=dlvr.it&utm_medium=twitter " assertion.
- assertion comment " Our new paper proposes a path beyond 'guardrails' and self-regulaation for Generative AI and Trustworthy, Open and Equitable science: https://mit-genai.pubpub.org/pub/s793ca1i/release/1?readingCollection=9070dfe7 " assertion.
- assertion comment " βWhat I am suggesting here is that there is no clean way of separating the epistemic goals of scientists and research projects from the social and political uses that science serves.β π https://twitter.com/rtk254/status/1835487623193772349/photo/1 https://www.taylorfrancis.com/chapters/edit/10.4324/9781003315032-34/epistemic-diversity-ignorance-non-ideal-philosophy-science-quill-kukla " assertion.
- assertion comment " βWhat I am suggesting here is that there is no clean way of separating the epistemic goals of scientists and research projects from the social and political uses that science serves.β π https://twitter.com/rtk254/status/1835487623193772349/photo/1 https://www.taylorfrancis.com/chapters/edit/10.4324/9781003315032-34/epistemic-diversity-ignorance-non-ideal-philosophy-science-quill-kukla " assertion.
- assertion comment " Cognitive models of serendipity! Looking forward to reading this π https://twitter.com/rtk254/status/1835705999795130613/photo/1 source: https://journals.sagepub.com/doi/10.1177/10892680241254759 " assertion.
- assertion comment " Are there any brave souls that have tried to make Supersize Me but for AI automation? (same idea, but eating epistemic junk food instead of real junk food) https://twitter.com/rtk254/status/1835305989387563303/photo/1 For more on the relations between junk food and epistemic junk food https://x.com/rtk254/status/1667868407486619652 " assertion.
- assertion comment " Happening now, if you are spontaneous (dozens from all over the world are already here) exciting https://twitter.com/AITinkerers/status/1834704578643681478 " assertion.