<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:wfw="http://wellformedweb.org/CommentAPI/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
><channel><title>science &#8211; Technodite</title><atom:link href="https://technodite.com/tag/science/feed/" rel="self" type="application/rss+xml" /><link>https://technodite.com</link><description>We talk Tech, No BS</description><lastBuildDate>Fri, 25 Aug 2023 09:45:30 +0000</lastBuildDate><language>en-US</language><sy:updatePeriod>hourly</sy:updatePeriod><sy:updateFrequency>1</sy:updateFrequency><generator>https://wordpress.org/?v=6.3.1</generator><image><url>https://technodite.com/wp-content/uploads/2023/08/cropped-TD-logo-circle-blue-on-black-624-32x32.png</url><title>science &#8211; Technodite</title><link>https://technodite.com</link><width>32</width><height>32</height></image> <item><title>New Study: Artificial Intelligence Used to Estimate Rice Yields</title><link>https://technodite.com/news/new-study-artificial-intelligence-used-to-estimate-rice-yields/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Fri, 25 Aug 2023 09:45:29 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=537</guid><description><![CDATA[Researchers train convolutional neural network models that can estimate rice yield by analyzing pre-harvest photographs]]></description><content:encoded><![CDATA[<p>A study by researchers from Japan has shown that artificial intelligence (AI) can be used to estimate rice yields. 
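</p>

<p>At the core of such a model is the convolution operation: a small learned filter slides over the image, exploiting local spatial structure. A minimal sketch in NumPy (a toy image and filter for illustration, not the study's trained model):</p>

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image (the cross-correlation used in CNNs)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A toy 5x5 "image" and a 3x3 horizontal-gradient filter.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

A real yield estimator stacks many such filters with nonlinearities and ends in a regression head that maps the final feature maps to a single yield number.

<p>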
The study, which was published in the journal Plant Phenomics, used ground-based digital images taken at the harvesting stage of the crop, combined with convolutional neural networks (CNNs), to estimate rice yield.</p><p>Convolutional neural networks (CNNs) are a type of feed-forward neural network that learns features directly from data by optimizing its filters. They are designed to emulate the behavior of the visual cortex and mitigate the challenges posed by the multilayer perceptron (MLP) architecture by exploiting the strong spatially local correlation present in natural images.</p><h2 class="gb-headline gb-headline-f0c81996 gb-headline-text">Applications</h2><p>CNNs have applications in:</p><ul><li>image and video recognition</li><li>recommender systems</li><li>image classification</li><li>image segmentation</li><li>medical image analysis</li><li>natural language processing</li><li>brain–computer interfaces</li><li>financial time series</li></ul><p>The study was conducted in 20 locations in seven countries. The researchers gathered rice canopy images and rough grain yield data from each location. They then used this data to train a CNN model to estimate rice yield.</p><h2 class="gb-headline gb-headline-42fda8a7 gb-headline-text">Capabilities of the Developed CNN Model</h2><ul><li>The model was able to explain around 68%-69% of yield variation in the validation and test datasets. The researchers say that this is a promising result, as it suggests that AI can be used to accurately estimate rice yields.</li><li>The model was also able to identify the importance of panicles &#8212; loose-branching clusters of flowers &#8212; in yield estimation. 
The model could predict yield accurately during the ripening stage by recognizing mature panicles, and it also detected cultivar and water-management differences in yield in the prediction dataset.</li><li>The study&#8217;s findings suggest that AI could be used to monitor rice productivity at regional scales.</li></ul><p>However, the researchers say that further research is needed to adapt the model to low-yielding and rainy environments.</p><p>The AI-based method has also been made available to farmers and researchers through a simple smartphone application called HOJO, greatly improving the accessibility of the technology. The app records crop growth, and its ability to link location, date, and time information to each record supports users who take growth profiles in expansive fields as well as field workers who compare fields in various places. It is already available on iOS and Android.</p><p>The researchers hope that their work will lead to better management of rice fields and assist accelerated breeding programs, contributing positively to global food production and sustainability initiatives.</p>]]></content:encoded></item><item><title>MinD-Vis: A Mind-Reading AI or Decoding Human Visual Stimuli from Brain Recordings</title><link>https://technodite.com/news/mind-vis-a-mind-reading-ai-or-decoding-human-visual-stimuli-from-brain-recordings/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Mon, 21 Aug 2023 20:16:06 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=470</guid><description><![CDATA[Singaporean researchers have developed an AI system that can generate an image of what a person is seeing by reading their brain 
waves.]]></description><content:encoded><![CDATA[<p>MinD-Vis is a groundbreaking framework designed to decode human visual stimuli from brain recordings. Developed to deepen our understanding of the human visual system, MinD-Vis aims to bridge the gap between human and computer vision through the Brain-Computer Interface.</p><p>MinD-Vis represents a significant advancement in the field of Brain-Computer Interface and human vision decoding. By effectively decoding visual stimuli from brain recordings, it provides valuable insights into the workings of the human visual system and paves the way for future research in this area.</p><h2 class="gb-headline gb-headline-681311b4 gb-headline-text">Framework Overview</h2><p>The MinD-Vis framework consists of two main stages:</p><p><strong>Sparse-Coded Masked Brain Modeling (SC-MBM):</strong> This stage focuses on modeling the brain using sparse coding techniques. It helps in capturing the essential features and patterns in the brain recordings.</p><p><strong>Double-Conditioned Latent Diffusion Model (DC-LDM):</strong> This stage uses a latent diffusion model conditioned on both the input brain recordings and the output visual stimuli. It helps in generating highly plausible images that match semantically with the input brain recordings.</p><p>By boosting the information capacity of feature representations learned from a large-scale resting-state fMRI dataset, MinD-Vis can reconstruct highly plausible images with semantically matching details from brain recordings using very few paired annotations.</p><h2 class="gb-headline gb-headline-2f7d5de6 gb-headline-text">Benchmarking and Results</h2><p>MinD-Vis has been benchmarked both qualitatively and quantitatively. 
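</p>

<p>The first stage's idea, hiding most of the fMRI signal and training a network to fill it back in, can be illustrated with a toy masked-reconstruction objective (hypothetical random data and a trivial stand-in for the model, not the paper's architecture):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fMRI" vector of 100 voxel values (SC-MBM actually works on
# patchified, sparsely coded fMRI, but a flat vector shows the objective).
signal = rng.normal(size=100)

# Hide a large random fraction of positions, as masked modeling does.
mask = rng.random(100) < 0.75          # True = hidden from the model
visible = np.where(mask, 0.0, signal)  # the model only sees unmasked values

# Stand-in "prediction": zeros at hidden spots; a real network would
# predict them from the visible context.
prediction = visible.copy()

# The reconstruction loss is computed only on the masked positions.
loss = np.mean((prediction[mask] - signal[mask]) ** 2)
```

In the actual framework, the representation learned this way then conditions the diffusion stage that generates the image.

<p>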
The experimental results indicate that MinD-Vis outperforms state-of-the-art methods in both semantic mapping (100-way semantic classification) and generation quality (FID) by 66% and 41%, respectively.</p><p>For more information, see the official <a rel="noreferrer noopener" href="https://github.com/zjc062/mind-vis" target="_blank">GitHub repository</a> for MinD-Vis, which provides detailed information about the framework, its components, and its performance.</p>]]></content:encoded></item><item><title>Elemental Cognition: A Leap Forward in AI</title><link>https://technodite.com/news/elemental-cognition-a-leap-forward-in-ai/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Fri, 18 Aug 2023 12:06:35 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=459</guid><description><![CDATA[David Ferrucci, the renowned artificial intelligence researcher who led the team that created IBM Watson, has successfully raised nearly $60 million for his AI startup, Elemental Cognition]]></description><content:encoded><![CDATA[<p>Elemental Cognition, the brainchild of renowned artificial intelligence researcher David Ferrucci, is making waves in the AI industry. Ferrucci, who led the team that created IBM Watson, has successfully raised nearly $60 million for his AI startup.</p><h2 class="wp-block-heading">A New Era of AI</h2><p>Located in New York’s historic Helmsley Building, Elemental Cognition is on a mission to develop AI that “thinks before it talks”. The company offers two enterprise products, Cogent and Cora, which are essentially chatbots designed for different scenarios. 
They can be used in financial services, interactive travel planning, and for automating research discovery in life sciences.</p><h2 class="wp-block-heading">A Stellar Team</h2><p>The company’s leadership team includes other former IBM employees such as Eric Brown and Mike Barborak, who are both vice presidents. Prominent investors and advisors include Jim Breyer, founder and CEO of Breyer Capital and one of the first investors in Facebook, former IBM CEO Sam Palmisano, Geoff Yang from Redpoint Ventures, and Greg Jensen, co-chief investment officer at Bridgewater.</p><h2 class="wp-block-heading">A Bright Future</h2><p>Ferrucci confirmed the financing in an email to CNBC. “With our recent round of funding, Elemental Cognition will continue to capitalize on our efforts to bring reliable reasoning and transparency to the market,” Ferrucci wrote.</p><p>This significant financial boost reflects the escalating interest in artificial intelligence and its vast applications. Elemental Cognition’s edge lies in its unique hybrid AI platform. Unlike models that solely hinge on large language models, Elemental Cognition synergizes these with a cutting-edge reasoning engine. This integration ensures that AI responses are not just intuitive but also well within predefined parameters.</p><p>As we move forward into an increasingly digital age, startups like Elemental Cognition are leading the charge in revolutionizing how we interact with technology. 
With its unique approach to AI development and a strong team at its helm, Elemental Cognition is poised to make significant strides in the field of artificial intelligence.</p>]]></content:encoded></item><item><title>Chatbots as a Tool for Improving Physical Activity, Diet, and Sleep</title><link>https://technodite.com/news/chatbots-as-a-tool-for-improving-physical-activity-diet-and-sleep/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Fri, 18 Aug 2023 11:18:56 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Health]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=456</guid><description><![CDATA[While receiving support from health professionals like doctors and dietitians can effectively improve healthy behaviors, more cost-effective and feasible interventions are needed to reduce rates of physical inactivity, unhealthy eating, and insufficient sleep.]]></description><content:encoded><![CDATA[<p>A group of scientists at the Alliance for Research in Exercise Nutrition and Activity (ARENA), University of South Australia, Adelaide, SA, Australia carried out a systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviors.</p><p>A systematic review and meta-analysis of 19 trials has found that chatbot interventions are efficacious for improving physical activity, diet quality, sleep duration, and quality. The trials included a total of 2,547 participants aged 9 to 71 years. 
Most interventions (79%) targeted physical activity, and most trials were of low quality.</p><h2 class="gb-headline gb-headline-2048bebb gb-headline-text">Outcomes of interest</h2><p>Outcomes of interest were</p><ol><li>total physical activity (any measure of low, moderate and/or vigorous intensity physical activity reported as a duration, e.g., minutes per day or week),</li><li>moderate-to-vigorous physical activity only (MVPA, minutes/week),</li><li>daily steps,</li><li>fruit and vegetable consumption,</li><li>sleep quality and</li><li>sleep duration.</li></ol><h2 class="gb-headline gb-headline-da6e4122 gb-headline-text">Findings</h2><p>The review found that chatbot interventions significantly increased total physical activity, steps, moderate-to-vigorous physical activity (MVPA), fruit and vegetable consumption, sleep duration, and sleep quality. Text-based and artificial intelligence (AI) chatbots were more effective than voice chatbots for diet. Multicomponent interventions (i.e., those that included other interventions in addition to the chatbot) were more effective than chatbot-only interventions for sleep.</p><p>The review concluded that chatbot interventions are efficacious for improving physical activity, diet quality, sleep duration, and quality across populations, age groups, durations, and as standalone or part of multicomponent interventions. The review also found that chatbot interventions were more effective than other interventions, such as text messaging and telephone calls.</p><p>The findings of this review suggest that chatbots can be a promising tool for improving health behaviors. Chatbots are convenient, accessible, and can be tailored to individual needs. They can also provide personalized feedback and support. 
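</p>

<p>The pooling behind numbers like these is typically inverse-variance weighting of per-study effects; a toy fixed-effect illustration with made-up effect sizes (not the review's data):</p>

```python
import numpy as np

# Hypothetical standardized mean differences and their sampling variances
# from three made-up trials (illustration only, not the review's data).
effects = np.array([0.30, 0.45, 0.20])
variances = np.array([0.02, 0.05, 0.03])

# Fixed-effect model: weight each study by the inverse of its variance.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
```

Random-effects models, the usual choice for heterogeneous trials like these, additionally fold a between-study variance component into each weight.

<p>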
Future research should focus on developing high-quality chatbot interventions and evaluating their long-term effectiveness.</p><h2 class="gb-headline gb-headline-6c5820f5 gb-headline-text">Additional Points</h2><ul><li>The review found that chatbot interventions were more effective for people who were less motivated to change their health behaviors.</li><li>Chatbots can be used to provide information, reminders, and encouragement.</li><li>Chatbots can also be used to track progress and provide feedback.</li><li>Chatbots can be used to connect people with other resources, such as fitness classes or community support groups.</li></ul><p>Source: <a href="https://www.nature.com/articles/s41746-023-00856-1#Abs1">Systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviours</a></p>]]></content:encoded></item><item><title>GedankenNet: An AI Model That Does Not Need Training</title><link>https://technodite.com/news/gedankennet-an-ai-that-actually-thinks/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Wed, 16 Aug 2023 08:27:05 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Machine learning]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=423</guid><description><![CDATA[GedankenNet is a new self-supervised deep learning model for hologram reconstruction. It is trained without any experimental data, using only randomly generated synthetic images.
]]></description><content:encoded><![CDATA[<p>A new AI-based approach for computational imaging and microscopy that does not require any experimental objects or real data has been developed by researchers from the UCLA Samueli School of Engineering.</p><p>The approach, called GedankenNet, is a self-supervised AI model that learns from the principles of physics and thought experiments. The model uses only the universal laws of physics that describe how electromagnetic waves travel in space to reconstruct microscopic images from random artificial holograms &#8212; generated entirely from &#8216;imagination&#8217; without depending on any real-world experiments, similarity to actual samples, or real data.</p><p>GedankenNet, whose name comes from the German word for thought, is a new self-supervised deep learning model for hologram reconstruction. Hologram reconstruction in machine learning refers to the use of machine learning techniques to reconstruct images from raw holograms. What makes GedankenNet special is that it does not need labeled or experimental training data.</p><h2 class="gb-headline gb-headline-e7bdf127 gb-headline-text">In a Nutshell</h2><p>&#8211; GedankenNet eliminates the need for large labeled training datasets and generalizes well to reconstructing experimental holograms of various tissue samples, even though it never saw real samples during training.</p><p>&#8211; The key innovation is training GedankenNet to match the input holograms by predicting holograms from its outputs using physics-based forward models, not relying on ground truth sample images.</p><p>&#8211; This physics-consistency loss encodes the wave propagation physics into the network, making its outputs compatible with Maxwell&#8217;s equations.</p><p>&#8211; GedankenNet showed superior generalization compared to supervised learning models trained on the same synthetic datasets. 
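</p>

<p>The physics-consistency idea can be sketched directly: numerically propagate a candidate object field through free space and compare the resulting hologram with the measured one. A NumPy sketch using the angular spectrum method (hypothetical optical parameters, not the paper's implementation):</p>

```python
import numpy as np

def propagate(field, wavelength, pixel, z):
    """Angular spectrum propagation of a complex field over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Free-space transfer function; evanescent components are suppressed.
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
wl, px, z = 0.5e-6, 1e-6, 300e-6  # hypothetical wavelength, pixel, defocus

# "True" object (unknown in practice) produces the measured hologram.
true_field = np.exp(1j * rng.uniform(0, 1, (64, 64)))
measured = np.abs(propagate(true_field, wl, px, z)) ** 2

# A candidate reconstruction is scored by re-propagating it and comparing
# its predicted hologram to the measurement (the physics-consistency loss).
estimate = np.exp(1j * rng.uniform(0, 1, (64, 64)))
loss = np.mean((np.abs(propagate(estimate, wl, px, z)) ** 2 - measured) ** 2)
```

Training against such a loss pushes the network's outputs to be consistent with wave propagation, with no ground-truth sample images required.

<p>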
It also outperformed iterative phase retrieval algorithms.</p><p>&#8211; GedankenNet successfully reconstructed both the amplitude and phase images of various tissue types, such as lung, prostate, and kidney, from experimental holograms, despite training only on random synthetic data.</p><p>&#8211; The network can digitally autofocus defocused holograms and showed resilience to unknown shifts in parameters like pixel size and wavelength.</p><p>&#8211; This self-supervised deep learning approach eliminates the need for large labeled training datasets and has broad applicability for computational imaging and microscopy.</p><p>Source: <a href="https://www.nature.com/articles/s42256-023-00704-7">Self-supervised learning of hologram reconstruction using physics consistency | Nature Machine Intelligence</a></p>]]></content:encoded></item><item><title>Researchers Find Quantum Material Capable of Mimicking Brain Function</title><link>https://technodite.com/news/quantum-material-mimics-brain-function/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Tue, 15 Aug 2023 17:36:29 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Neuromorphic systems]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=413</guid><description><![CDATA[The researchers from Q-MEEN-C discovered that a quantum material called samarium hexaboride (SmB6) can exhibit non-locality when stimulated by electrical pulses.]]></description><content:encoded><![CDATA[<p>A recent article from ScienceDaily reports on a breakthrough in quantum materials research that could lead to more energy-efficient computing.</p><p>The article is titled &#8220;<a href="https://www.sciencedaily.com/releases/2023/08/230808110939.htm">Quantum material exhibits &#8216;non-local&#8217; behavior that mimics brain function: New research shows a possible way to improve energy-efficient computing</a>&#8221; 
and was published on August 8, 2023.</p><p>The article describes the work of a consortium called Q-MEEN-C, led by the University of California San Diego, that aims to create brain-like computers using quantum materials. Quantum materials are substances that exhibit unusual properties at the atomic scale, such as superconductivity, magnetism, and topological phases.</p><p>One of the challenges of creating brain-like computers is to replicate the non-local interactions that occur in the brain. Non-locality means that stimuli applied to one part of a system can affect another part that is not directly connected. For example, in the brain, electrical signals can travel between distant neurons and synapses, enabling complex information processing.</p><p>The researchers from Q-MEEN-C discovered that a quantum material called samarium hexaboride (SmB6) can exhibit non-locality when stimulated by electrical pulses. They created an array of electrodes on top of a thin film of SmB6 and measured the resistance changes between them. They found that stimulating one pair of electrodes could also affect the resistance of another pair that was not adjacent.</p><p>This non-local behavior mimics brain function and could enable new types of devices that perform neuromorphic computing. Neuromorphic computing is a paradigm that uses analog circuits and architectures inspired by the brain to perform tasks such as pattern recognition, learning, and memory.</p><p>The researchers believe that SmB6 is not the only quantum material that can exhibit non-locality and plan to explore other candidates in the future. They also hope to scale up their experiments to create larger arrays of electrodes and devices that can perform more complex functions.</p><p>The article concludes by highlighting the potential applications and benefits of neuromorphic computing using quantum materials. 
These include faster, more accurate, and more energy-efficient data processing, as well as new insights into the physics of quantum materials and the biology of the brain.</p>]]></content:encoded></item><item><title>Chinese quantum computer beats Google&#8217;s again</title><link>https://technodite.com/news/chinese-quantum-computer-beats-googles-again/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Tue, 06 Jul 2021 18:10:47 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=65</guid><description><![CDATA[A Chinese research team has surpassed Google, building a quantum computer that completed a calculation in just over an hour that would take classical computers more than eight years to perform. In recent years researchers around the globe have finally reached the ‘quantum advantage’ – the point at which quantum computing can solve a problem ... <a title="Chinese quantum computer beats Google&#8217;s again" class="read-more" href="https://technodite.com/news/chinese-quantum-computer-beats-googles-again/" aria-label="More on Chinese quantum computer beats Google&#8217;s again">Read more</a>]]></description><content:encoded><![CDATA[<p>A Chinese research team has surpassed Google, building a quantum computer that completed a calculation in just over an hour that would take classical computers more than eight years to perform.</p><p>In recent years researchers around the globe have finally reached the ‘quantum advantage’ – the point at which quantum computing can solve a problem that normal computers would need years to solve.</p><p>A team from Google first reached the milestone in 2019, using superconducting qubits to demonstrate quantum supremacy. 
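</p>

<p>The benchmark behind these supremacy claims asks a machine to draw samples from the output distribution of a random quantum circuit. At toy scale the task can be stated with a direct statevector simulation (a three-qubit sketch in NumPy; real experiments use dozens of qubits, far beyond this brute-force approach):</p>

```python
import numpy as np

rng = np.random.default_rng(42)
n = 3  # qubits; the statevector has 2**n amplitudes

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(state, a, b):
    """Apply a controlled-Z gate between qubits a and b."""
    cz = np.diag([1, 1, 1, -1]).astype(complex).reshape(2, 2, 2, 2)
    psi = np.moveaxis(state.reshape([2] * n), (a, b), (0, 1))
    psi = np.tensordot(cz, psi, axes=([2, 3], [0, 1]))
    return np.moveaxis(psi, (0, 1), (a, b)).reshape(-1)

def random_rotation(rng):
    """A random single-qubit unitary."""
    t, p, l = rng.uniform(0, 2 * np.pi, size=3)
    return np.array([[np.cos(t / 2), -np.exp(1j * l) * np.sin(t / 2)],
                     [np.exp(1j * p) * np.sin(t / 2),
                      np.exp(1j * (p + l)) * np.cos(t / 2)]])

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |000>

# A few layers of random rotations interleaved with entangling CZ gates.
for _ in range(4):
    for q in range(n):
        state = apply_1q(state, random_rotation(rng), q)
    state = apply_cz(state, 0, 1)
    state = apply_cz(state, 1, 2)

probs = np.abs(state) ** 2                     # output distribution
samples = rng.choice(2 ** n, size=5, p=probs)  # the sampling task
```

Supremacy experiments run such circuits on enough qubits that the 2**n statevector cannot fit in any classical memory, which is exactly what makes the sampling task classically hard.

<p>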
The theoretical basis for these achievements depends on sampling the output <a href="https://arxiv.org/abs/2007.07872">distributions of random quantum circuits.</a></p><p>The following year a team from China managed to trump their time by using photonic qubits.</p><p>The same researcher who beat Google the first time, Jian-Wei Pan at the University of Science and Technology of China in Shanghai, has outperformed Google again.</p><p>The problem solved this round was around 100 times more challenging than the one solved by Google&#8217;s Sycamore processor in 2019, and the technology used was different.</p>]]></content:encoded></item><item><title>Scientists achieved highest image resolution with a technique that could help develop better batteries</title><link>https://technodite.com/news/scientists-achieved-highest-image-resolution-with-a-technique-that-could-help-develop-better-batteries/</link><dc:creator><![CDATA[Cray Zephyr]]></dc:creator><pubDate>Tue, 29 Jun 2021 11:41:44 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[science]]></category><guid isPermaLink="false">https://technodite.com/?p=31</guid><description><![CDATA[Cornell University researchers captured a sample from a crystal in three dimensions and magnified it 100 million times.]]></description><content:encoded><![CDATA[<p>Cornell University researchers captured a sample from a crystal in three dimensions and magnified it 100 million times.</p><p>Their work could help develop materials for designing more powerful and efficient phones, computers and other electronics.</p><p>The researchers obtained the image using a technique called electron ptychography. It involves shooting a beam of electrons, about a billion of them per second, at a target material. 
The beam moves infinitesimally as the electrons are fired, so they hit the sample from slightly different angles each time. Based on the speckle pattern generated by the electrons, machine-learning algorithms can calculate where the atoms were in the sample and what their shapes might be, the researchers say.</p><p>Cornell physicist David Muller says they figured out how to reconstruct two-dimensional samples with the technique, which resulted in the highest-resolution image achieved by any method in the world and a Guinness World Record. The researchers were also able to better preserve their samples by using a lower-energy electron beam.</p><p>The next generation of electronic devices needs such high-resolution techniques. Researchers are searching for more efficient semiconductors in order to move beyond silicon-based computer chips, and engineers need to know what they are working with to make this happen. The technique also matters for the transition from fossil fuels to renewable energy, because batteries are a promising area for applying electron ptychography.</p>]]></content:encoded></item></channel></rss>