<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:wfw="http://wellformedweb.org/CommentAPI/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
><channel><title>Kaan Tanimore &#8211; Technodite</title><atom:link href="https://technodite.com/author/kaan-tanimore/feed/" rel="self" type="application/rss+xml" /><link>https://technodite.com</link><description>We talk Tech, No BS</description><lastBuildDate>Wed, 23 Aug 2023 10:04:07 +0000</lastBuildDate><language>en-US</language><sy:updatePeriod>hourly</sy:updatePeriod><sy:updateFrequency>1</sy:updateFrequency><generator>https://wordpress.org/?v=6.3</generator><image><url>https://technodite.com/wp-content/uploads/2023/08/cropped-TD-logo-circle-blue-on-black-624-32x32.png</url><title>Kaan Tanimore &#8211; Technodite</title><link>https://technodite.com</link><width>32</width><height>32</height></image> <item><title>How AI Brings Greater Accuracy, Speed, and Scale to Microsegmentation</title><link>https://technodite.com/insights/how-ai-brings-greater-accuracy-speed-and-scale-to-microsegmentation/</link><dc:creator><![CDATA[Kaan Tanimore]]></dc:creator><pubDate>Wed, 23 Aug 2023 10:04:06 +0000</pubDate><category><![CDATA[Insights]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Cybersecurity]]></category><guid isPermaLink="false">https://technodite.com/?p=505</guid><description><![CDATA[AI and ML can help with microsegmentation by automating policy creation and enforcement, discovering and mapping workloads and dependencies, adapting to dynamic changes in the environment, and providing visibility and analytics.
]]></description><content:encoded><![CDATA[<p>Microsegmentation is a security strategy that divides a network into small, isolated segments. This makes it more difficult for attackers to move laterally within the network if they breach one segment. Microsegmentation is a key component of zero trust security, a security model that assumes no user or device is trusted by default.</p><p>AI and machine learning (ML) can improve the accuracy, speed, and scale of microsegmentation. AI can automate the creation and enforcement of security policies, as well as the discovery and mapping of workloads and dependencies. ML can learn the behavior of the network and identify anomalies that could indicate a security breach.</p><h2 class="gb-headline gb-headline-b36a84de gb-headline-text">Fields of Use</h2><p>Here are some of the ways that AI and ML can improve microsegmentation:</p><ul><li><strong>Automating policy creation and enforcement:</strong>&nbsp;AI can automate the creation and enforcement of security policies. This reduces the time and effort required to implement microsegmentation and helps ensure that policies are applied consistently across the network.</li><li><strong>Discovering and mapping workloads and dependencies:</strong>&nbsp;AI can discover and map workloads and dependencies in the network. This information can be used to create more granular security policies tailored to the specific needs of each workload.</li><li><strong>Adapting to dynamic changes in the environment:</strong>&nbsp;AI can adapt to dynamic changes in the environment. For example, when a new workload is added to the network, AI can automatically create a security policy for it.</li><li><strong>Providing visibility and analytics:</strong>&nbsp;AI can provide visibility and analytics into the network. 
This information can be used to identify anomalies that could indicate a security breach.</li></ul><p>A number of AI and ML microsegmentation solutions are available from vendors such as Illumio, Zscaler, VMware, Cisco, and Guardicore. These solutions can help organizations improve the accuracy, speed, and scale of their microsegmentation efforts.</p><h2 class="gb-headline gb-headline-663962a7 gb-headline-text">Advantages</h2><p>Here are some of the benefits of using AI and ML for microsegmentation:</p><ul><li><strong>Increased accuracy:</strong>&nbsp;AI can help ensure that security policies are applied accurately to the network, reducing the risk of security breaches.</li><li><strong>Increased speed:</strong>&nbsp;AI can automate the creation and enforcement of security policies, speeding up the implementation of microsegmentation.</li><li><strong>Increased scale:</strong>&nbsp;AI can scale microsegmentation to large, complex networks.</li><li><strong>Reduced costs:</strong>&nbsp;AI can reduce the cost of implementing and managing microsegmentation.</li></ul><p>AI and ML can be valuable tools for improving the accuracy, speed, and scale of microsegmentation. 
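</p><p>The anomaly-detection idea above can be sketched with a toy behavioral baseline: learn what normal traffic between two workloads looks like, then flag flows that deviate sharply. Everything below is illustrative only; the feature names and threshold are hypothetical, and the commercial tools named here use far more sophisticated proprietary models.</p>

```python
import numpy as np

# Toy behavioral baseline for one workload pair: learn the mean/std of a
# few flow features from a window of normal traffic, then flag flows that
# deviate too far. (Illustrative only; features and threshold are made up.)
rng = np.random.default_rng(42)

# features per flow: [packets/s, bytes/s, distinct destination ports]
baseline = rng.normal(loc=[100.0, 5000.0, 3.0],
                      scale=[10.0, 500.0, 1.0], size=(500, 3))
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def is_anomalous(flow, threshold=6.0):
    """Flag a flow whose z-score exceeds the threshold on any feature."""
    z = np.abs((np.asarray(flow) - mu) / sigma)
    return bool(np.any(z > threshold))

print(is_anomalous([102.0, 5100.0, 3.0]))   # ordinary flow
print(is_anomalous([100.0, 5000.0, 40.0]))  # sudden fan-out to many ports
```

<p>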
By automating the creation and enforcement of security policies, as well as the discovery and mapping of workloads and dependencies, AI can help organizations protect their networks from cyberattacks more effectively.</p>]]></content:encoded></item><item><title>Unraveling the Recent Bitcoin Price Drop</title><link>https://technodite.com/news/unraveling-the-recent-bitcoin-price-drop/</link><dc:creator><![CDATA[Kaan Tanimore]]></dc:creator><pubDate>Fri, 18 Aug 2023 12:23:27 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[bitcoin]]></category><category><![CDATA[blockchain]]></category><category><![CDATA[cryptocurrency]]></category><guid isPermaLink="false">https://technodite.com/?p=462</guid><description><![CDATA[Bitcoin, the world’s largest cryptocurrency by market capitalization, has recently experienced a significant price drop. This article aims to shed light on the potential reasons behind this sudden market movement.]]></description><content:encoded><![CDATA[<p>Bitcoin, the world’s largest cryptocurrency by market capitalization, has recently experienced a significant price drop. This article aims to shed light on the potential reasons behind this sudden market movement.</p><h2 class="wp-block-heading">SpaceX’s Bitcoin Holdings</h2><p>One of the factors that might have contributed to the drop in Bitcoin’s price is a report suggesting that SpaceX may have sold some or all of its $373 million in Bitcoin holdings. This news could have put downward pressure on the price of Bitcoin.</p><h2 class="wp-block-heading">Interest Rate Fears</h2><p>The broader markets’ expectations of future interest rate hikes from the U.S. Federal Reserve could have contributed to the drop. 
Higher interest rates make riskier assets less attractive relative to safer yields, prompting a pullback.</p><h2 class="wp-block-heading">Government Bond Yields</h2><p>The recent rise in government bond yields could have reduced liquidity for the broader market, leading to a sell-off.</p><h2 class="wp-block-heading">Large Sell Orders</h2><p>The sudden move down could have resulted from a single large actor placing a big sell order, which then put further pressure on the derivatives market.</p><h2 class="wp-block-heading">Market Volatility</h2><p>Cryptocurrencies are persistently volatile and are affected by a massive market for derivatives. There are on average five times more Bitcoin derivative trades than spot trades of the coin itself. These bets can shift the price of cryptocurrencies, adding to volatility.</p><p>While these are potential reasons, it’s important to note that the exact cause can vary based on a multitude of factors. The world of cryptocurrencies is complex and ever-changing, making it both exciting and challenging for investors and enthusiasts alike.</p>]]></content:encoded></item><item><title>What is Neuromorphic Computing?</title><link>https://technodite.com/insights/what-is-neuromorphic-computing/</link><dc:creator><![CDATA[Kaan Tanimore]]></dc:creator><pubDate>Tue, 15 Aug 2023 12:02:43 +0000</pubDate><category><![CDATA[Insights]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Neuromorphic systems]]></category><guid isPermaLink="false">https://technodite.com/?p=410</guid><description><![CDATA[Neuromorphic computing is the design and engineering of computing systems inspired by the human brain.]]></description><content:encoded><![CDATA[<p>Neuromorphic computing seeks to mimic the neural structure and operation of the human brain. It involves designing computer chips that work similarly to neurons and synapses in the brain. 
The goal is to create more efficient computing systems that can solve complex problems like pattern recognition and natural language processing.</p><h2 class="wp-block-heading">Origins</h2><p>The concept of neuromorphic computing was first introduced in the 1980s by Carver Mead, a professor at Caltech. He coined the term &#8220;neuromorphic&#8221; to describe the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures.</p><p>In the 1990s, Mead and his colleagues designed early neuromorphic chips that implemented models of the retina, cochlea, and other sensory systems. However, these early systems were limited in complexity due to the immaturity of chip manufacturing at the time.</p><h2 class="wp-block-heading">Current State of Research</h2><p>In recent years, advances in VLSI technology have enabled the creation of more sophisticated neuromorphic chips with millions of artificial neurons and synapses. Major technology firms like IBM and Intel have active neuromorphic computing research projects.</p><p>In 2014, IBM unveiled its TrueNorth chip, which has 1 million programmable neurons and 256 million synapses. It can run pattern recognition tasks at much lower power than conventional CPUs or GPUs.</p><p>In 2017, Intel introduced Loihi, a neuromorphic chip with 130,000 neurons and 130 million synapses. Loihi is aimed at real-time processing for adaptive and autonomous applications like robotics.</p><p>Universities and research labs around the world are also developing custom neuromorphic chips for different applications, from self-driving cars to medical diagnostics. However, there are still many challenges to overcome before neuromorphic systems can rival biological brains.</p><h2 class="wp-block-heading">Applications</h2><p>One of the main application areas being explored for neuromorphic chips is machine learning and AI. 
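</p><p>The event-driven operation that sets these chips apart can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic element most spiking chips implement in some form. The sketch below is illustrative only; its parameters are arbitrary and do not correspond to any particular chip:</p>

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a spike event
# only when it crosses a threshold. Work happens only around events,
# which is where neuromorphic hardware gets its power savings.
# (Illustrative sketch; parameters are arbitrary, not any chip's design.)
def lif(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i in enumerate(current):
        v += (dt / tau) * (-v + i)   # leaky integration toward the input
        if v >= v_thresh:            # threshold crossing -> spike event
            spikes.append(t)
            v = v_reset              # reset after the spike
    return spikes

# A constant supra-threshold drive yields a regular, sparse spike train;
# a sub-threshold drive produces no events (and so, ideally, no work).
print(lif([1.5] * 50))
print(lif([0.5] * 50))
```

<p>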
The low-power, event-driven signaling of neuromorphic hardware is well suited to deep learning models and algorithms.</p><p>Neuromorphic systems also hold promise for real-time sensory processing and situation analysis in autonomous robots and vehicles. Their spiking neural networks allow efficient processing of visual, auditory, and spatial data.</p><p>Other potential applications include data filtering, pattern recognition for medical diagnosis, financial analysis, social behavioral modeling, etc.</p><h2 class="wp-block-heading">Challenges</h2><p>A key challenge is scaling up neuromorphic systems to match the complexity of biological neural networks, which have billions of neurons. Most existing neuromorphic chips have only thousands to millions of artificial neurons.</p><p>There are also challenges in programming the desired functions and learning rules into the neuromorphic chips. Most existing systems require hand-tuning of the synaptic connections, which is not practical for larger networks.</p><p>Integrating neuromorphic chips with traditional von Neumann architectures and dataflow is also an area of active research.</p><h2 class="wp-block-heading">Startups and Industry Adoption</h2><p>In addition to projects at IBM, Intel, and universities, many technology startups are emerging around neuromorphic computing, such as BrainChip, General Vision, and SynSense.</p><p>Large companies like Qualcomm, Samsung, and Bosch are investing in and partnering with neuromorphic startups to eventually bring neuromorphic processors to consumer devices.</p><p>Industry adoption is still in its early phases. It may take 5-10 more years of R&amp;D before neuromorphic chips begin displacing conventional CPUs for specialized applications. 
But the potential for low-power intelligence is driving rapid growth and investment in this field.</p>]]></content:encoded></item><item><title>Tokenization of Real-World Assets: A Promising Area for Crypto Regulation</title><link>https://technodite.com/insights/tokenization-of-real-world-assets-a-promising-area-for-crypto-regulation/</link><dc:creator><![CDATA[Kaan Tanimore]]></dc:creator><pubDate>Mon, 14 Aug 2023 13:30:57 +0000</pubDate><category><![CDATA[Insights]]></category><category><![CDATA[blockchain]]></category><category><![CDATA[DeFi]]></category><guid isPermaLink="false">https://technodite.com/?p=387</guid><description><![CDATA[Tokenizing RWA means creating blockchain tokens that represent real-world assets. This can apply to many types of assets, such as property, art, or IP.]]></description><content:encoded><![CDATA[<p>The tokenization of real-world assets (RWA) is a process of representing ownership of real-world assets on a blockchain. This can be done for a variety of assets, including real estate, art, and even intellectual property.</p><p>RWA tokenization has the potential to bring more transparency and efficiency to financial markets. For example, it could make it easier for investors to track the ownership and value of real estate assets. It could also make it easier for businesses to raise capital by tokenizing their assets.</p><p>However, it is important to ensure that RWA tokenization is properly regulated to prevent fraud and abuse. Regulators need to understand the risks and benefits of RWA tokenization in order to develop clear and fair regulations.</p><h2 class="gb-headline gb-headline-7720b973 gb-headline-text"><strong>The Benefits of RWA Tokenization</strong></h2><p>There are a number of potential benefits to RWA tokenization, including:</p><ul><li><strong>Increased transparency:</strong> RWA tokenization can make it easier to track the ownership and value of real-world assets. 
This can be beneficial for investors, businesses, and regulators.</li><li><strong>Increased efficiency:</strong> RWA tokenization can make it easier to trade real-world assets, reducing transaction costs and making these assets easier for investors to access.</li><li><strong>New investment opportunities:</strong> RWA tokenization can create new investment opportunities by enabling investors to own fractional shares of real-world assets.</li><li><strong>Access to capital:</strong> RWA tokenization can make it easier for businesses to raise capital by tokenizing their assets and selling them to investors.</li></ul><h2 class="gb-headline gb-headline-814244dd gb-headline-text"><strong>The Risks of RWA Tokenization</strong></h2><p>There are also a number of potential risks associated with RWA tokenization, including:</p><ul><li><strong>Fraud:</strong> RWA tokenization could be used to facilitate fraud. For example, scammers could create fake tokens that represent ownership of real-world assets that do not exist.</li><li><strong>Market manipulation:</strong> RWA tokenization could be used to manipulate markets. For example, a large investor could buy up a large share of the tokens for a particular asset and then drive up their price.</li><li><strong>Security risks:</strong> RWA tokenization could expose investors to security risks. For example, if a hacker were to compromise the private keys or the custody platform that holds them, they could steal the tokens that represent ownership of real-world assets.</li></ul><h2 class="gb-headline gb-headline-7b4b35b3 gb-headline-text"><strong>The Future of Crypto Regulation</strong></h2><p>The future of crypto regulation will depend on a number of factors, including:</p><ul><li>The rise of decentralized finance (DeFi): DeFi is a financial system that is built on top of blockchain technology. 
It is not subject to traditional financial regulations, which could make it a target for fraud and abuse.</li><li>The adoption of cryptocurrencies by mainstream businesses: If cryptocurrencies are adopted by mainstream businesses, regulators will need to develop regulations that protect consumers and prevent fraud.</li><li>The evolution of the crypto ecosystem: The crypto ecosystem is constantly evolving. New technologies and applications are being developed all the time. Regulators will need to keep up with these changes to develop effective regulations.</li></ul><h2 class="gb-headline gb-headline-4bbace59 gb-headline-text"><strong>Conclusion</strong></h2><p>RWA tokenization is a promising area for crypto regulation. However, it is important to ensure that RWA tokenization is properly regulated to prevent fraud and abuse. The crypto industry needs to work with regulators to develop clear and fair regulations. The industry should also educate regulators about the benefits of cryptocurrencies and the potential risks.</p>]]></content:encoded></item><item><title>IBM Using Analog AI to Mimic Biological Brains</title><link>https://technodite.com/news/ibm-trying-to-mimick-biological-brains/</link><dc:creator><![CDATA[Kaan Tanimore]]></dc:creator><pubDate>Mon, 14 Aug 2023 08:23:10 +0000</pubDate><category><![CDATA[News]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[IBM]]></category><category><![CDATA[Neuromorphic systems]]></category><guid isPermaLink="false">https://technodite.com/?p=395</guid><description><![CDATA[IBM is working on analog in-memory computing to overcome the hardware limitations that AI computers struggle with.]]></description><content:encoded><![CDATA[<p>Deep neural networks (DNNs) that power foundation models and generative AI are transforming our lives. However, traditional digital computing architectures are not optimal for these models, as they separate memory and processing units. 
This forces data movement between them, which reduces speed and efficiency. Hardware designed for AI inference can overcome this challenge, but many such designs still use this split architecture.</p><p>Analog AI is a new approach to computation that mimics how the brain works. It uses nanoscale phase-change memory (PCM) devices to store and process data as a range of values, not just 0s and 1s. This makes it faster and more energy-efficient than digital AI. However, analog AI faces two main challenges: it needs to be as accurate as digital AI, and it needs to work well with the other digital components on the chip.</p><h2 class="gb-headline gb-headline-d41e717b gb-headline-text">A new chip that uses phase-change memory</h2><p>Neural networks are powerful tools for artificial intelligence, but they require a lot of energy and time to process data. One way to overcome this challenge is analog in-memory computing (AIMC), which performs computations directly within the memory where the network weights are stored. This reduces the need to move data around, which saves energy and reduces latency.</p><p>However, AIMC alone is not enough to achieve end-to-end improvements in performance. It also needs to be combined with on-chip digital operations and communication, as well as robust and scalable memory devices. A team of researchers from IBM and other institutions has developed a multicore AIMC chip that integrates all these components, using PCM as the memory device.</p><p>PCM is a type of resistive memory that can store multiple levels of information by changing its electrical resistance. It can also perform matrix-vector multiplications (MVMs) by applying voltages to the memory cells and summing the resulting currents. The researchers designed and fabricated a chip with 64 AIMC cores, each containing 256 × 256 PCM cells, interconnected by an on-chip network. 
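</p><p>The analog MVM at the heart of each core can be sketched numerically. The model below is an illustration only, not IBM&#8217;s implementation, and its noise figure is made up: weights map to device conductances, inputs to row voltages, and each column&#8217;s output current sums the per-cell contributions (Ohm&#8217;s law plus Kirchhoff&#8217;s current law).</p>

```python
import numpy as np

# Illustrative numerical model of one analog in-memory MVM (a sketch, not
# IBM's chip): weights are stored as device conductances G, inputs are
# applied as row voltages v, and each column's output current sums the
# per-cell contributions G_ij * v_i.
rng = np.random.default_rng(0)

W = rng.normal(size=(8, 8))                    # target weight matrix
G = W + rng.normal(scale=0.02, size=W.shape)   # programmed conductances, with write noise
v = rng.normal(size=8)                         # input voltages, one per row

i_out = G.T @ v        # column currents: the analog result of the MVM
exact = W.T @ v        # ideal digital reference

# The analog result tracks the exact product up to small device error.
print(np.max(np.abs(i_out - exact)))
```

<p>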
The chip also implements the digital activation functions and additional processing involved in convolutional and recurrent neural networks.</p><p>The chip can achieve near-software-equivalent inference accuracy with ResNet and long short-term memory (LSTM) networks, while performing all the computations associated with the weight layers and the activation functions on the chip. For 8-bit input/output MVMs, the chip can achieve a maximum throughput of 16.1 or 63.1 tera-operations per second (TOPS) at an energy efficiency of 2.48 or 9.76 TOPS per watt, respectively, depending on the operational mode.</p><p>This work demonstrates that PCM-based AIMC can enable high-performance, low-power, and scalable neural network inference on a single chip. It also opens up new possibilities for exploring novel architectures and applications for AIMC.</p><p>You can read the full paper here: <a href="https://www.nature.com/articles/s41928-023-01010-1">https://www.nature.com/articles/s41928-023-01010-1</a></p>]]></content:encoded></item></channel></rss>