Michel Burger – mimik (https://mimik.com)

Operationalizing AI Agents on Devices
https://mimik.com/operationalizing-ai-agents-on-devices/
Wed, 28 May 2025 15:02:45 +0000

The post Operationalizing AI Agents on Devices first appeared on mimik.

How mim OE & MCP Enable Scalable Context-Aware Systems

The era of stateless, cloud-bound AI is over. We’re entering a new paradigm: Agentic-Native systems that run on endpoint devices, where software doesn’t just wait for input but actively reasons, interacts, and acts. 

To move from prototypes to production, developers need to equip AI agents running on end devices with: 

  • Persistent memory across interactions 
  • Access to real-time systems and structured data 
  • The ability to invoke tools, not just run inference 
  • Resilience across a fragmented, heterogeneous compute landscape 
  • And most critically: shared context and peer collaboration 

This calls for two foundational layers: 

  • A runtime environment for all devices that brings execution to where context lives 
  • A protocol that standardizes how agents interact with external systems and with each other 

mimik Operating and Execution Environment (mim OE) is the execution foundation, and the Model Context Protocol (MCP) adds optional structured interoperability at the content/conversation level. 


The Room and the Conversation: A Mental Model 

To better understand how these components work together, imagine a meeting room: 

  • mim OE is the room itself: it provides the air, light, space, and seating that allow participants (agents) to be present, visible, and reachable. It ensures that agents know who is there, what’s nearby, and how to reach them. It gives them situational context. 
  • MCP is the conversation that happens in the room: it defines the structured language that agents use to interact. Like a shared vocabulary and grammar, MCP shows agents how to ask questions, understand available functions, request tools, and share information. It provides agent context at the conversation/content level and enables meaningful collaboration. 

Together, mim OE ensures agents are aware and operational, while MCP enables structured conversations when needed. 

Visual Overview: mim OE + MCP Architecture 

mim OE provides the secure, local environment where agents are deployed, discover one another, and operate on and near the source of data, both locally and across distributed systems. 

MCP defines how those agents structure their interactions, dynamically discover each other’s capabilities, and understand how to invoke them. 

Defining Context and Enabling Agent-to-Agent Communication 

MCP, introduced by Anthropic and supported by OpenAI, acts like a shared language layer. While APIs expose functions, MCP adds semantic clarity. MCP is a client-server protocol that allows AI agents (clients) to access and exchange structured context from external systems (servers). It defines a normalized interface for: 

  • Resources: Structured data (e.g., documents, profiles, sensor logs) 
  • Prompts: Instruction templates that shape agent behaviour 
  • Tools: APIs or functions that agents can invoke to act on their environment 
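For concreteness, this normalized interface can be sketched as JSON-RPC 2.0 messages. The method names (`tools/list`, `tools/call`) follow the public MCP specification, but the payloads below are simplified and the toy dispatcher is illustrative, not a conforming MCP server:

```python
# Illustrative JSON-RPC 2.0 messages in the style of the Model Context
# Protocol. Method names follow the public MCP spec; payload details are
# simplified for illustration.

def list_tools_request(req_id):
    # Client asks the server which tools it exposes.
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/list"}

def call_tool_request(req_id, name, arguments):
    # Client invokes a named tool with structured arguments.
    return {"jsonrpc": "2.0", "id": req_id, "method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

# A toy registry standing in for an MCP server's tool table.
TOOLS = {
    "query_temperature": {
        "description": "Read the current room temperature",
        "inputSchema": {"type": "object",
                        "properties": {"room": {"type": "string"}}},
    }
}

def handle(request):
    # Minimal dispatcher: answers tools/list and tools/call only.
    if request["method"] == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif request["method"] == "tools/call":
        name = request["params"]["name"]
        result = {"content": [{"type": "text", "text": f"called {name}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```

The key point is that the tool's description and input schema travel with it, so a client that has never seen this server can still discover what it offers.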

But here’s what makes MCP especially useful: It enables agent-to-agent communication at the conversation level. 

Using MCP, agents can engage in structured dialogue, trigger shared workflows, and collaborate on decision-making, even across different nodes. 

In the room metaphor, mim OE ensures agents are present, visible, and discoverable, like knowing who’s in the room and being able to walk over and talk to them. MCP is an option that adds the layer of structured conversation: it lets agents understand what others in the room can do, how to ask for it, and how to collaborate using shared meaning. Without MCP, agents must already know how to speak each other’s language. With MCP, they can discover functions and coordinate workflows even if they’ve never met before. It transforms presence into purposeful dialogue. 

And because mim OE can serve as an MCP server as well, it allows local agents to expose MCP-compatible endpoints, making every room distributed and conversational. 

While mim OE is where agents live, discover, and collaborate in context, MCP defines how they describe their capabilities and interact in a structured, portable, and semantically meaningful way. 

mim OE is a lightweight, secure runtime that runs on phones, gateways, microcontrollers, drones, vehicles, and cloud VMs, turning them into intelligent, discoverable nodes. 

It includes: 

  • A serverless execution engine 
  • A local service mesh for agent discovery and peer-to-peer coordination 
  • Built-in offline resilience and context persistence 
  • Optional support to act as an MCP server for local tools, prompts, and resources 

With mim OE, agents gain situational awareness of their environment including sensors, nearby agents, local memory and can function autonomously or as part of a larger distributed mesh. 

Comparison: API Gateway vs MCP Server in mim OE 

If mim OE includes both an API gateway and acts as an MCP server, the key difference in usage lies in who is talking to whom, how, and why. Here’s a breakdown: 

  • Primary Role: the API Gateway routes and exposes REST/gRPC APIs to external consumers; the MCP Server provides structured, standardized access to contextual resources for AI agents. 
  • Audience: the API Gateway serves external apps, services, and clients (traditional or AI agents); the MCP Server serves AI agents (clients) that want to learn the language and functions of other agents in order to query tools, prompts, or structured resources. 
  • Interaction Style: request-response (generic APIs) or streaming (active APIs) for the API Gateway; semantic, structured dialogue (agent-to-agent or agent-to-resource) for the MCP Server. 
  • Use Case: the API Gateway exposes microservices as APIs (e.g., /Locations, /Configs); the MCP Server enables agents to discover functions and semantics in order to talk to tools, prompts, and other agents (e.g., get_tool(tool_id)). 
  • Protocol: REST, gRPC, etc. for the API Gateway; the Model Context Protocol (MCP) for the MCP Server. 
  • Client Type: any client apps, solutions, or agents at the level of resource exposure for the API Gateway; AI agents at the semantic level for the MCP Server. 
  • Security Context: API tokens, headers, and ACLs for the API Gateway; agent context, identity scopes, and structured context boundaries for the MCP Server. 

Example 

Imagine you’re running a smart home hub using mim OE: 

  • The API Gateway exposes /light/on or /temperature/current to apps or humans. 
  • The MCP Server exposes tools like turn_on_light() or query_temperature() that other agents (e.g., voice assistant, energy optimizer) can use in structured workflows. 
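A minimal sketch of the contrast, with invented handlers: the gateway exposes bare routes that callers must already know, while the MCP-style table makes each tool self-describing so an agent can discover it:

```python
# Hypothetical smart-home handlers; the same capability exposed two ways.

# 1) API-gateway style: a bare route table. Callers must already know
#    the path and what it does.
ROUTES = {
    "/light/on": lambda params: {"status": "on"},
    "/temperature/current": lambda params: {"celsius": 21.5},
}

def gateway_call(path, params=None):
    return ROUTES[path](params or {})

# 2) MCP-server style: tools carry descriptions and schemas, so an agent
#    can discover what exists, what it means, and how to call it.
MCP_TOOLS = {
    "turn_on_light": {
        "description": "Turn on a light in the given room",
        "inputSchema": {"type": "object",
                        "properties": {"room": {"type": "string"}},
                        "required": ["room"]},
        "fn": lambda args: {"status": "on", "room": args["room"]},
    },
}

def mcp_discover():
    # A newcomer agent reads descriptions and schemas before calling anything.
    return [{"name": n, "description": t["description"],
             "inputSchema": t["inputSchema"]} for n, t in MCP_TOOLS.items()]

def mcp_call(name, args):
    return MCP_TOOLS[name]["fn"](args)
```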

Analogy (Extending the Room Metaphor) 

  • API Gateway is like a reception desk in the room: humans or external systems walk in and make explicit requests like “turn on the light” or “get the weather.” 
  • MCP Server is like an internal team whiteboard + toolshed: agents inside the room use it to access shared resources, prompts, and tools, and to talk to each other with structure and context. 

API Gateway and MCP: Complementary Interfaces and Roles 

  • API Gateway is essential for interoperability with agent and non-agent systems alike: mobile apps, cloud services, dashboards, and external clients. It exposes REST or gRPC endpoints that allow these systems to invoke resources directly, regardless of whether they are agentic. 
  • MCP Server is essential for building agentic AI systems in which agents need to reason, discover, and collaborate autonomously. 

They’re complementary: the API gateway makes mim OE useful to the outside world, while MCP makes it powerful for internal agent collaboration. 

Example: Agent Collaboration in a Smart Vehicle 

Imagine a smart vehicle running mim OE: 

  • A driver behavior agent monitors sensors and adapts behavior in real time 
  • A navigation assistant queries an MCP tool for optimized routing 
  • A fleet coordination agent uses MCP to communicate with nearby vehicles 

All three agents are microservices running locally in mim OE. They are aware of each other’s presence (thanks to mim OE), and they speak via MCP, enabling dynamic coordination. 
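The coordination above might be sketched as follows. The LocalMesh registry, agent names, and tools are all invented stand-ins for mim OE's actual discovery mesh:

```python
# Toy sketch of three vehicle agents coordinating through a local registry
# that stands in for mim OE's service mesh. All names are invented.

class Agent:
    def __init__(self, name, tools=None):
        self.name = name
        self.tools = tools or {}

class LocalMesh:
    """Node-level discovery stand-in: agents register themselves and look
    each other up by the tools they expose, not by hardcoded addresses."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def find_tool(self, tool_name):
        for agent in self.agents.values():
            if tool_name in agent.tools:
                return agent.tools[tool_name]
        raise LookupError(tool_name)

mesh = LocalMesh()
mesh.register(Agent("driver_behavior"))
mesh.register(Agent("navigation",
                    {"optimize_route": lambda src, dst: [src, "A7", dst]}))
mesh.register(Agent("fleet_coordination",
                    {"nearby_vehicles": lambda: ["truck-12"]}))

# The behavior agent invokes a peer's tool without knowing in advance
# which agent provides it.
route = mesh.find_tool("optimize_route")("depot", "site-4")
```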

Why It Matters to Developers 

Without mim OE, agents are isolated: unable to execute flexibly through discovery, or to collaborate and coordinate across nodes based on real-time context. There’s no shared environment that lets them discover one another, communicate at the API level, and act across different systems depending on network, trust, authorization, or situational proximity. 

Without MCP, agents may be present but lack a shared structure for understanding each other’s capabilities. Developers must define ahead of time which agents need to talk, what they offer, and how they operate, essentially preloading the “language” and “vocabulary” for interaction. 

With mim OE + MCP, developers get: 

  • A clean architectural separation between runtime execution (mim OE) and interaction semantics (MCP) 
  • On-the-fly, context-aware collaboration across any node—local or remote—based on network visibility, access permissions, and relevance 
  • An open, structured way to enable agent-to-agent conversations without predefined knowledge 
  • Local-first deployment with low latency and privacy control that still works across clouds 
  • Cross-node collaboration without centralized dependencies 

Getting Started 

To build real-world Agentic-Native systems: 

  • Deploy mim OE on supported devices (Linux, Android, Windows, iOS, etc.) 
  • Wrap agents as microservices with internal APIs or MCP endpoints 
  • Enable mim OE as an MCP server to host local tools, prompts, and resources 
  • Use MCP clients in agents for structured, interoperable communication 
  • Build for offline-first collaboration, not cloud dependency 
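Steps 2–4 above could look roughly like this sketch. The paths, method handling, and packaging are hypothetical; mim OE's real microservice and MCP interfaces will differ:

```python
import json

# Hypothetical sketch: one agent wrapped as a microservice that answers
# both a plain internal API and an MCP-style endpoint. Paths and handling
# are invented for illustration, not mim OE's actual interface.

class AgentMicroservice:
    def __init__(self):
        # One trivial tool; a real agent would expose its actual functions.
        self.tools = {"echo": lambda args: args}

    def handle(self, path, body=""):
        if path == "/healthz":                   # internal (non-MCP) API
            return 200, json.dumps({"ok": True})
        if path == "/mcp":                       # MCP-style endpoint
            msg = json.loads(body)
            if msg["method"] == "tools/list":
                result = {"tools": [{"name": n} for n in self.tools]}
            elif msg["method"] == "tools/call":
                p = msg["params"]
                result = {"content": self.tools[p["name"]](p["arguments"])}
            else:
                return 404, json.dumps({"error": "unknown method"})
            return 200, json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                                    "result": result})
        return 404, "{}"
```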

The Future Is Contextual and Conversational 

Agentic-Native AI is not a monolithic cloud model. It’s distributed. It’s local. It’s peer aware. 

With mim OE providing the room, and MCP enabling the conversation, developers can now build intelligent systems that are: 

  • Scalable 
  • Modular 
  • Resilient 
  • Real-time 
  • And most importantly: collaborative 

Agent-based AI isn’t a cloud-centric model; it’s a context-first, conversation-driven, and everywhere-enabled model. 

With MCP powering agent conversations, and mim OE hosting them wherever context lives, developers have everything needed to build the next generation of intelligent systems: open, interoperable, and edge native. 

This is how Agentic-Native systems are operationalized at scale. 

On-Device Analytics and Observability: New Frontiers in the Data Landscape
https://mimik.com/on-device-analytics-pioneering-new-frontiers-in-the-data-landscape/
Tue, 18 Jun 2024 23:17:45 +0000

The post On-Device Analytics and Observability: New Frontiers in the Data Landscape first appeared on mimik.

Abstract:

This article explores how mimik’s edgeEngine enhances the landscape of data analytics and observability, providing tools for more precise, timely, and actionable data insights. While not an analytics engine itself, mimik’s technology significantly improves data collection and preprocessing, serving as a robust foundation for deeper analytics processing by third-party systems or higher layers within the architecture. With the increasing demand for offline-first capabilities, privacy, and security, mimik’s edgeEngine empowers analytics and observability solutions with advanced on-device processing. This approach enables the application of specific data policies, offers capabilities for staged analytics, and structures data before it is sent to different endpoints according to policy requirements. While edgeEngine does not directly deliver analytical summaries or recommendations, it empowers solution providers to build more powerful, secure, and responsive analytics systems, positioning them at the forefront of innovative data processing.

Introduction:

Imagine a world where devices are not just passively sending data but are active participants in real-time decision-making. The mimik Hybrid Edge Cloud (HEC) technology fulfills this vision. With mimik edgeEngine—a cloud-native software operating environment—applications can utilize the power of smart devices to perform real-time data processing directly on the device and securely expose the outcome via APIs to the rest of the system, ensuring that only those with the right credentials and permissions can access it. This includes providing capabilities such as an API gateway, AI agents, and microservices runtime. The mimik edgeEngine supports nearly all operating systems and can run on a variety of devices, including smartphones, tablets, cameras, microcontrollers, robots, and drones. This platform enables offline-first operations, reducing latency, ensuring data privacy, and minimizing reliance on cloud connectivity. By facilitating seamless integration and discovery of microservices workloads, including AI agents, mimik’s edgeEngine delivers a more responsive and efficient system architecture.

Transformative Power of On-Device Analytics

Imagine a car detecting a driver’s medical emergency through a wearable device. Instantly, on-device analytics processes the data, alerts healthcare services, informs emergency contacts, and coordinates with smart city traffic systems to ensure a clear path to the nearest hospital. This real-time, cross-domain system interaction exemplifies the transformative power of on-device analytics.

The Role of On-Device Analytics

On-device analytics processes data where it’s generated, offering significant advantages:

  • Reduced Latency: Immediate decision-making without cloud dependency.
  • Enhanced Privacy: Local data processing minimizes exposure.
  • Improved Efficiency: Real-time responses and reduced bandwidth usage.

The Role of AI in On-Device Analytics

Artificial Intelligence (AI) enhances on-device analytics by enabling systems to learn from data, identify patterns, and make informed decisions. AI agents can operate on end devices where most data is generated, allowing for real-time responses to dynamic conditions. This capability is crucial in scenarios where immediate action is required whether for safety-critical applications, industrial automation, personalized user experiences, or other situations where rapid decision-making is essential, with or without internet connectivity.

AI can enhance on-device analytics through:

  • Predictive Analysis: AI algorithms can predict potential issues before they occur, enabling preventive measures. For instance, AI can analyze health data from wearable devices to predict a possible cardiac event and alert medical professionals in advance.
  • Real-Time Decision Making: By processing data locally, AI agents can make instantaneous decisions. This is critical in a variety of industries, including industry 4.0, manufacturing, automotive, and healthcare, where immediate responses are essential for optimizing operations.
  • Personalization: AI can analyze user data to provide personalized experiences. In smart homes, AI can learn the habits and preferences of residents to optimize energy usage and enhance comfort.
  • Enhanced Security: AI can detect anomalies in data that might indicate security threats, allowing for immediate action to protect sensitive information.

How Analytics Works and mimik’s Impact

Analytics involves three primary phases: Data Collection, Transport, and Consumption.

Data Collection:

Data Collection is the first critical phase of analytics, encompassing logs, metrics, and traces. Logs capture specific events, providing a detailed account of what happens within the system. Metrics measure system performance at any given time, offering insights into the system’s operational health. Traces link these events and metrics to specific system components, helping to identify and attribute occurrences accurately. mimik enhances this phase by enabling devices with a microservice runtime and API gateways, significantly increasing observability. With edgeEngine, data can be collected at multiple levels—from the operating system and platform-specific APIs to user-permitted and application-level data—ensuring a comprehensive capture of system behaviour.

  • Logs: Capture specific events.
  • Metrics: Measure system performance.
  • Traces: Attribute events to system components.
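A toy collector illustrating the three signal types; real pipelines (e.g., OpenTelemetry-based) are far richer, and the record shapes here are invented:

```python
import time

# Illustrative on-device telemetry buffer distinguishing the three signal
# types described above: logs, metrics, and traces.

class TelemetryBuffer:
    def __init__(self):
        self.records = []

    def log(self, event, **fields):
        # Logs: a discrete event with arbitrary detail.
        self.records.append({"kind": "log", "event": event, **fields})

    def metric(self, name, value):
        # Metrics: a numeric measurement at a point in time.
        self.records.append({"kind": "metric", "name": name,
                             "value": value, "ts": time.time()})

    def trace(self, span, component):
        # Traces: attribute an operation to a specific system component.
        self.records.append({"kind": "trace", "span": span,
                             "component": component})

    def of_kind(self, kind):
        return [r for r in self.records if r["kind"] == kind]
```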

mimik’s Added Value:

  • Increased Observability: By enabling devices with microservice runtime and API gateways, mimik significantly enriches data collection. This integration increases the number of data sources, capturing more granular details about system behavior.
  • Data Levels: mimik’s edgeEngine allows data collection at multiple levels (OS, platform without user permission, platform with user permission, and application level), ensuring comprehensive data capture directly from devices.
  • End-to-End View: Microservices running on smart devices act as active data sources, providing a holistic view of system performance and behavior.

Transport:

Once data is collected, it needs to be transported efficiently. This phase deals with both structured and unstructured data, optimizing it for transport. On-device processing can perform advanced filtering, reducing the volume of data transmitted over networks. mimik’s platform supports on-device microservices that perform machine learning-driven filtering, ensuring that only valuable data is sent. Additionally, on-device processing enhances the semantic value of data, improving its quality before it reaches central systems. By aggregating and buffering data optimally, mimik minimizes connection requests and reduces bandwidth usage, making data transport more efficient and cost-effective.

  • Data Types: Structured and unstructured data.
  • Optimization: On-device filtering and caching improve efficiency and reduce costs.

mimik’s Added Value:

  • Enhanced Filtering: On-device microservices can perform advanced filtering to reduce the data volume sent over the network. These filters can be machine learning-driven, ensuring only valuable data is transmitted.
  • Semantic Enhancement: On-device processing can enhance the semantic value of data, improving its quality before it reaches central systems.
  • Efficient Aggregation and Buffering: mimik’s platform allows for optimal aggregation and buffering, minimizing connection requests and reducing bandwidth usage.
  • Structured Data Transmission: Data can be structured and enriched according to policies (security, compliance), ensuring efficient and compliant data transport.
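The transport-stage ideas (filtering, semantic enrichment, aggregation, and buffering) can be sketched as below. The numeric threshold stands in for an ML-driven filter and is purely illustrative:

```python
# Illustrative edge transport: filter on-device, enrich, buffer, and flush
# in batches so fewer, smaller payloads cross the network.

class EdgeTransport:
    def __init__(self, send, batch_size=3, threshold=10.0):
        self.send = send              # callable that ships a batch upstream
        self.batch_size = batch_size
        self.threshold = threshold    # stand-in for an ML-driven filter
        self.buffer = []

    def ingest(self, reading):
        # Filtering: drop unremarkable readings at the source.
        if abs(reading["value"]) < self.threshold:
            return
        # Semantic enhancement: tag the reading before transport.
        reading["severity"] = ("high" if abs(reading["value"]) > 2 * self.threshold
                               else "elevated")
        self.buffer.append(reading)
        # Aggregation/buffering: one connection per batch, not per reading.
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(list(self.buffer))
            self.buffer.clear()
```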

Consumption:

The final phase of analytics is data consumption, which can occur either in real-time at the source (early consumption) or further along the data stream for aggregated business intelligence (late consumption). mimik enables microservices to process data locally on smart devices, making it actionable in real-time. This local processing enhances the semantic value of data and reduces latency, ensuring that insights are available faster. Furthermore, results from central processing can be pushed back to devices, providing minimal latency and optimal performance. This approach not only improves efficiency but also ensures that data-driven decisions can be made swiftly and securely.

  • Early Consumption: Real-time decision-making at the source.
  • Late Consumption: Aggregated data for high-level business intelligence.

mimik’s Added Value: 

  • Local Processing: By enabling microservices on smart devices, mimik facilitates on-device data processing, making it actionable in real-time.
  • Enhanced Data Value: Local processing enhances the semantic value of data, whether for local or central consumption.
  • Reduced Latency: Results from central processing can be pushed to devices, ensuring minimal latency and optimal performance.
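Early versus late consumption can be illustrated in a few lines: the device acts immediately on each reading, while only a reduced aggregate leaves for upstream business intelligence. The alert threshold is invented:

```python
# Illustrative split between early and late consumption of a data stream.

def consume(readings, act, alert_above=100):
    total, count = 0.0, 0
    for r in readings:
        # Early consumption: real-time decision at the source.
        if r > alert_above:
            act(r)
        total += r
        count += 1
    # Late consumption: only the aggregate leaves the device for BI.
    return {"count": count, "mean": total / count if count else 0.0}
```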

Enhancing Observability and Business Intelligence

On-device analytics aids in two crucial areas:

  1. Observability:
  • System Behavior: Provides precise metrics and traces, integrating devices as active system components.
  • Debugging and Optimization: Enhances understanding and management of system performance.

mimik’s Added Value:

  • Behavioral Insights: By treating devices as integral parts of the system, mimik enables better insights into system behavior, helping identify issues more accurately.
  • Integrated Debugging: Devices with compute capabilities become part of the system, improving debugging and optimization processes.
  2. Business Intelligence: 
  • System Usage: Offers insights into user interactions and system refinement.
  • Targeting and Refinement: Improves delivery and consumption targeting based on real-time data.

mimik’s Added Value:

  • Enhanced Understanding: By integrating on-device microservices, mimik provides deeper insights into system usage and performance, improving business intelligence.

mimik’s edgeEngine Capabilities

mimik’s edgeEngine provides a runtime environment for microservices, enabling real-time context and decision-making directly on devices. This leads to:

  • Accurate Context: Captures real data at the source, enhancing system reliability.
  • System Integration: Treats devices as integral parts of the system, not just endpoints.
  • Dynamic Service Mesh: Creates an adaptable system that can expand or shrink based on local conditions.

Conclusion

mimik’s platform transforms data analytics by enabling contextual on-device operations. This brings real-time processing, enhanced security, and reduced latency directly to where data is generated. By integrating endpoint devices as integral parts of the system, mimik supports offline-first, real-time data processing and decision-making directly on the device. This approach significantly reduces latency, adheres to data privacy and sovereignty, and securely exposes outcomes via standard APIs with granular access control right from the end-device.  By enabling applications to utilize the power of devices—from smartphones and tablets to drones and microcontrollers—with advanced cloud-native functionalities, mimik ensures seamless integration and discovery of AI agents and microservices workloads.

The ability to perform sophisticated analytics directly on devices paves the way for faster and more reliable decision-making across various industries, driving superior sustainability, adaptability, efficiency, performance, and user experience. mimik is not only transforming how data is processed and utilized in real time but also enabling new business models and opportunities within the Data-as-a-Service ecosystem. With mimik, the future of intelligent systems lies in the seamless and efficient management of data, driving innovation and unlocking unprecedented value on-device.

The four stages of edge AI
https://mimik.com/the-four-stages-of-edge-ai/
Mon, 27 Nov 2023 20:45:36 +0000

The post The four stages of edge AI first appeared on mimik.

In the rapidly evolving world of edge computing and artificial intelligence (AI), there are several crucial stages to consider. This blog delves into the complexities and innovations at each stage, beginning with Local Execution, where AI models are deployed directly on edge devices for real-time data processing. We then explore Contextualization, focusing on the local handling of contextual information for personalized responses. The third stage, AI to AI Communication, examines the critical coordination between multiple AI nodes, facilitated by edge microservices. Finally, AI-adapted Choreography highlights how multiple AI models across an edge network can dynamically interact with each other, optimizing overall system performance. Through these stages, the role of mimik technology emerges as pivotal, enabling seamless integration and efficient operation of AI models in edge computing environments.

Stage 1: Local Execution

In this stage, the focus is on deploying the AI model at the edge, which means running the model directly on the device that generates the data. Typically, the model is trained in the cloud and then pushed to the edge devices such as cameras or sensors. The purpose is to perform real-time recognition or analysis of data streams locally without relying on constant communication with the cloud.

The information generated by the local execution can be handled in different ways. If the recognition results are conclusive, only the result is sent to the cloud for further processing or storage. However, if the recognition is inconclusive, the image or relevant data may be sent to the cloud to retrain the model. Additionally, a lower resolution of the data stream can be archived for reference purposes.

For example, consider a security camera system using edge computing. The camera captures live video footage and runs an AI model locally for real-time object detection. Instead of sending every frame to the cloud for analysis, the AI model is deployed directly on the camera. The camera processes the video stream locally, identifies objects of interest, and sends only the relevant information, such as detected objects and their locations, to the cloud for further processing or storage.

It is essential to separate the model from the execution process because models need regular updates and the ability to manage the payload remotely. Mimik enables this separation by treating the model as a part of the edge microservice running on the device. The microservice acts as an interface between the cloud and the AI process, abstracting the handling of model updates from the recognition process. Another edge microservice handles the results, whether sending them to the cloud or other local systems. This ensures that the model can be easily updated and fine-tuned without disrupting the process of recognition or analysis.

By exposing the capabilities of handling the model and results as a local API, mimik simplifies the development process of AI solutions, making integrating edge computing into the workflow easier.
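The separation of model lifecycle from execution might be sketched like this; the "models" are trivial callables and all names are illustrative, not mimik APIs:

```python
# Illustrative separation: one component owns the model lifecycle (updates
# pushed from the cloud), while the recognition loop simply uses whatever
# model is currently deployed, so a swap never interrupts inference.

class ModelManager:
    """Stand-in for the microservice that receives model updates,
    independently of the inference process."""
    def __init__(self, model):
        self._model = model

    def update(self, new_model):
        self._model = new_model      # hot-swap without stopping inference

    def current(self):
        return self._model

def run_inference(manager, frame):
    # The recognition step always consults the currently deployed model.
    return manager.current()(frame)

# v1 "model": crude brightness check; v2 lowers the cutoff after retraining.
manager = ModelManager(lambda frame: "object" if sum(frame) > 10 else "none")
first = run_inference(manager, [4, 4, 4])    # sum = 12, above v1 cutoff
manager.update(lambda frame: "object" if sum(frame) > 5 else "none")
second = run_inference(manager, [2, 2, 2])   # sum = 6, above v2 cutoff
```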

Stage 2: Contextualization

In this stage, the model is executed locally, and the handling of the context in which the process occurs is also done locally. The context refers to events received by the device running the process or other devices within the same cluster, such as events triggered by user inputs through a UI or sensor inputs.

Local contextualization allows for the personalization of the model based on user preferences or specific scenarios. By processing events locally, edge devices can provide tailored experiences or responses without constantly sending data to the cloud for analysis and decision-making.

For example, consider an intelligent home system using edge computing. The system includes various devices like smart speakers, cameras, and sensors. Each device runs AI models locally to process data and respond to user commands. When a user speaks a command to a smart speaker, the AI model on the speaker processes the command locally, taking into account the context of the user’s preferences and the current state of the home environment. The speaker can provide personalized responses or control other devices within the cluster based on local contextual information.

Mimik achieves contextualization by running multiple edge microservices on the same node and facilitating interaction with other edge microservices on different nodes. This decentralized approach minimizes the need for data transfer to the cloud, as the devices within the cluster can communicate and share contextual information directly.
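A minimal sketch of local contextualization, with invented preference and quiet-hours rules: the command handler consults locally held context rather than a cloud service:

```python
# Illustrative local contextualization for the smart-home example: the
# response to a command depends on user preferences and device state held
# on the node itself. Rules and fields are invented.

class HomeContext:
    def __init__(self):
        self.preferences = {}   # e.g., user-chosen brightness
        self.state = {}         # e.g., current sensor/cluster state

def handle_command(command, ctx):
    if command == "lights on":
        level = ctx.preferences.get("brightness", 100)
        # Contextual adjustment: dim automatically during quiet hours.
        if ctx.state.get("quiet_hours"):
            level = min(level, 30)
        return {"device": "lights", "level": level}
    return {"error": "unknown command"}
```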

Stage 3: AI to AI communication

This stage recognizes that a complex system at the edge will be made of many nodes, each of which can have an AI handling the node’s logic. In this environment, while the execution of the model happens at the edge, the integration between the AIs is coordinated via the cloud. It must also be possible to allow direct communication between the AIs for local decision-making, by having them either exchange models or exchange the events generated by the AI processes using those models. 

For example, consider an autonomous driving system using edge computing. The system comprises multiple edge devices, such as cameras, LiDAR sensors, and control units, each running its own AI model for perception, decision-making, and control. These devices must exchange information and coordinate safe and efficient driving decisions. Instead of relying solely on a centralized system in the cloud, direct communication between the edge devices’ AI models is essential for local decision-making.

Mimik enables AI-to-AI communication by allowing models to be handled by edge microservices and creating an ad-hoc edge service mesh. This allows direct communication between edge microservices within the same node or between edge microservices running on different nodes. With mimik, multiple AIs at the edge can exchange information or models with a well-defined contract, facilitating coordinated actions without heavy reliance on a centralized cloud system.
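Direct AI-to-AI exchange under a well-defined contract might look like this sketch; the event type, fields, and braking rule are invented for illustration:

```python
# Illustrative AI-to-AI communication with a typed event "contract":
# a perception node publishes validated events directly to a planning
# node, with no cloud hop in between.

CONTRACT = {"obstacle_detected": {"distance_m": float, "bearing_deg": float}}

def validate(event_type, payload):
    # The contract: exact field set with the declared types.
    schema = CONTRACT[event_type]
    return set(payload) == set(schema) and all(
        isinstance(payload[k], t) for k, t in schema.items())

class PlannerAI:
    def __init__(self):
        self.actions = []

    def on_event(self, event_type, payload):
        if not validate(event_type, payload):
            raise ValueError("contract violation")
        if event_type == "obstacle_detected" and payload["distance_m"] < 10.0:
            self.actions.append("brake")

class PerceptionAI:
    def __init__(self, peer):
        self.peer = peer              # direct edge-to-edge link

    def detect(self, distance_m, bearing_deg):
        self.peer.on_event("obstacle_detected",
                           {"distance_m": distance_m,
                            "bearing_deg": bearing_deg})
```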

Stage 4: AI-adapted choreography

In this stage, the focus is on dynamically choreographing the behavior of multiple AI models across the edge network to optimize overall system performance, resource allocation, and coordination. The communication between AI models within each node and between nodes adapts to maximize the relationship of a collection of nodes.

For example, let’s consider a smart city infrastructure using edge computing. The infrastructure consists of various edge devices deployed throughout the city, such as traffic cameras, environmental sensors, and smart streetlights. Each device runs its AI model to perform specific tasks like traffic monitoring, air quality analysis, and intelligent lighting control.

In the AI-adapted choreography stage, the AI models within each device collaborate and communicate to optimize the overall performance of the smart city infrastructure. The models exchange information about traffic conditions, environmental data, and lighting requirements. Based on this information, they dynamically adapt their behavior to ensure efficient traffic flow, minimize energy consumption, and respond to changing environmental conditions.
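The adaptive loop described above can be sketched as a coordination round in which each node reads the others' reports and adjusts its local policy; the nodes and adaptation rules are invented:

```python
# Illustrative choreography round for the smart-city example: a traffic
# camera reports congestion, and a streetlight adapts its brightness to
# balance traffic safety against energy use.

def choreograph(nodes):
    """One coordination round: every node sees the others' reports,
    then adapts its own local policy."""
    reports = {name: node["report"]() for name, node in nodes.items()}
    for node in nodes.values():
        node["adapt"](reports)
    return reports

def make_city():
    camera = {"congestion": 0.8}
    light = {"brightness": 50}
    nodes = {
        "camera": {"report": lambda: {"congestion": camera["congestion"]},
                   "adapt": lambda r: None},   # camera only reports
        "light": {"report": lambda: {"brightness": light["brightness"]},
                  "adapt": lambda r: light.update(
                      brightness=100 if r["camera"]["congestion"] > 0.5
                      else 20)},               # brighten busy streets, dim empty ones
    }
    return camera, light, nodes
```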

Since these systems are generally developed by many organizations (different standards, different protocols), the context and the AI of each system component will also help define the protocol between the components, allowing components that are not necessarily made to communicate with each other to exchange information.

Mimik plays a crucial role in enabling AI-adapted choreography by providing the infrastructure for communication and coordination between the AI models across the edge network. It allows the AI models running on different devices to exchange data, share insights, and collectively make decisions to optimize the operation of the smart city infrastructure. Mimik’s edge service mesh facilitates the dynamic choreography of AI models and ensures efficient collaboration.

In summary, in the AI-adapted choreography stage, mimik enables the dynamic coordination and optimization of multiple AI models across an edge network, allowing them to collectively achieve better system performance, resource allocation, and coordination in complex scenarios like a smart city infrastructure.
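To make the idea concrete, here is a toy adaptation rule a smart-streetlight node might apply to signals received from peer nodes. The node names, thresholds, and rule are assumptions made for the sketch, not part of the mimik platform.

```python
# Illustrative only: a streetlight node adapts its brightness to traffic
# counts and air-quality readings published by peer edge nodes.

def choose_brightness(vehicle_count: int, air_quality_index: int) -> int:
    """Return a brightness level (0-100) from peer sensor readings."""
    base = 30 if vehicle_count == 0 else min(100, 30 + vehicle_count * 10)
    # Poor air quality often correlates with haze: raise brightness.
    if air_quality_index > 150:
        base = min(100, base + 20)
    return base

# Signals exchanged between nodes in one choreography round.
signals = {"traffic-cam-7": {"vehicles": 4}, "air-sensor-2": {"aqi": 180}}
brightness = choose_brightness(signals["traffic-cam-7"]["vehicles"],
                               signals["air-sensor-2"]["aqi"])
```

Each node applies its own local rule, but because the inputs come from peers, the collection of nodes behaves as one coordinated system.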

Conclusion

The role of mimik is to enable these stages by treating the AI model as part of the edge microservice running on the device. It abstracts the handling of model updates from the recognition process and facilitates the exchange of information between edge microservices. By providing a local API and creating an ad-hoc edge service mesh, mimik simplifies the development process and the integration of edge computing into AI workflows.


The post The four stages of edge AI first appeared on mimik.

]]>
Endpoint Device Security, The Missing Link in SASE https://mimik.com/endpoint-device-security-the-missing-link-in-sase/ Fri, 20 Oct 2023 09:07:17 +0000 https://stg-2x.mimik.com/?p=79738 Many organizations are turning to Secure Access Service Edge (SASE) solutions to fortify their security posture.

The post Endpoint Device Security, The Missing Link in SASE first appeared on mimik.

]]>
Introduction

In today’s rapidly evolving digital landscape, ensuring the security of endpoint devices has become more critical than ever before. The proliferation of remote work, mobile devices, and cloud-based applications has introduced new challenges for safeguarding sensitive data and maintaining network integrity. In response to these challenges, many organizations are turning to Secure Access Service Edge (SASE) solutions to fortify their security posture.

 

Traditional Security Implementation

Traditional security models are often described as a “castle-and-moat” approach. In this model, the organization’s network is considered the castle, and security solutions such as firewalls and VPNs act as the moat. Everything inside the network perimeter is considered trusted, while external elements are treated with suspicion.

  1. Perimeter-based Security: Traditional security relies on a perimeter-based model where the organization’s network is the fortress, and security solutions (firewalls, VPNs, etc.) serve as the protective moat. Elements inside the perimeter are trusted, while anything external is treated cautiously.

  2. Centralized Security Appliances: Security solutions, like firewalls and intrusion prevention systems, are often centralized, especially at the data center. This often results in traffic being backhauled from remote locations or branches to this central point for inspection.

  3. VPN for Remote Access: Remote users typically connect to the network using VPNs, which can introduce latency since traffic from remote users is tunneled to the central office before accessing the internet or other resources.

  4. Disparate Solutions: Traditional setups might have various standalone solutions – a firewall from one vendor, a secure web gateway from another, VPNs from another, etc. This can complicate integration and management.

SASE Security Implementation

While traditional security implementations were well-suited for a time when most resources and users were centralized, the shift towards cloud services, remote work, and mobile users has revealed its limitations. SASE aims to address these modern challenges by offering a more flexible, integrated, and decentralized cloud-first security solution optimized for the current state of enterprise computing. Here’s how it differs:

  1. Identity and Context-aware Security: SASE treats every access attempt as untrusted instead of relying on a network perimeter. Access is granted based on the user’s or device’s identity, the access request’s context, real-time analytics, and other factors.

  2. Decentralized Security Services: Security is implemented closer to the point of access, often at the edge or as a cloud service. This means users connect to their nearest security service point, reducing latency.

  3. Integrated Suite of Services: SASE aims to combine various security services like Secure Web Gateways (SWG), Cloud Access Security Brokers (CASB), Firewall as a Service (FWaaS), Zero Trust Network Access (ZTNA), etc., into a unified platform. This integrated approach simplifies management and ensures that security policies are applied everywhere.

  4. Optimized for Cloud and Mobile: Traditional security models have shown strains as organizations have shifted to cloud services and remote work. SASE is designed with the cloud and mobility in mind, ensuring that security policies are consistently applied no matter where users are or which devices they use.

  5. Scalable and Flexible: Being cloud-native, SASE solutions can scale as required and adapt quickly to changing business needs.

The Role of the Device in SASE Implementation

While SASE drastically changes the enterprise security approach, it still considers the end-user device, whether mobile, non-mobile, or IoT, as an integral part of the security solution. In a SASE solution, the services primarily reside in the cloud, leveraging a global network of points of presence (PoPs) to provide security and networking services as close as possible to the end-user or device.

However, specific components or agents might run on the end-user’s device to interact with these cloud-based services. Here’s what typically runs on the device in a SASE architecture:

  1. Endpoint Agent/Client Software: This is a lightweight software client installed on the user’s device (laptop, smartphone, tablet, etc.). The agent is responsible for:

    • Initiating secure connections to the SASE cloud.

    • Enforcing local security policies.

    • Monitoring device health and security posture.

    • Redirecting traffic to the SASE service for security checks and policy enforcement.

  2. Zero Trust Network Access (ZTNA) Components: ZTNA ensures that every access attempt to resources, even from within the network, is authenticated and verified. The endpoint agent often includes components to enforce ZTNA principles, such as:

    • Identity verification.

    • Context-aware access controls (based on device health, location, user role, etc.).

    • Application-level connectivity (connecting the user only to the specific applications they need, not the entire network).

  3. Data Encryption Tools: The agent ensures that data in transit is encrypted when connecting to the SASE cloud or other organizational resources.

  4. Local Security Services: While most security services in a SASE architecture are cloud-based, certain local checks or policies might still be enforced on the device. This can include:

    • Local firewall rules.

    • Host intrusion prevention systems.

    • Data loss prevention checks for sensitive data.

  5. Security Posture Check: Before granting access to resources, the SASE solution might check the device’s security posture. This can involve verifying:

    • Antivirus/antimalware status.

    • Operating system and software patch levels.

    • Compliance with organizational security policies.

  6. Management and Configuration Tools: These allow IT teams to configure the agent’s behavior, update policies, and integrate with other IT management tools.

  7. Logging and Monitoring Components: The agent might also collect logs and other relevant data for analysis. This information can be sent to the central SASE solution for anomaly detection, analysis, and reporting.
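As a concrete illustration of the posture check described in item 5, the sketch below shows the kind of compliance test an endpoint agent might run before access is granted. The field names and the patch-level threshold are assumptions, not taken from any specific SASE vendor.

```python
# Hedged sketch of an endpoint-agent posture check. Policy thresholds
# and device-report fields are invented for the example.

REQUIRED_PATCH_LEVEL = 42  # hypothetical minimum OS patch level

def check_posture(device: dict) -> tuple:
    """Return (compliant, reasons) for a device posture report."""
    reasons = []
    if not device.get("antivirus_enabled", False):
        reasons.append("antivirus disabled")
    if device.get("patch_level", 0) < REQUIRED_PATCH_LEVEL:
        reasons.append("OS patches out of date")
    if device.get("disk_encrypted") is not True:
        reasons.append("disk not encrypted")
    return (len(reasons) == 0, reasons)

ok, why = check_posture({"antivirus_enabled": True,
                         "patch_level": 41,
                         "disk_encrypted": True})
# ok is False; why == ["OS patches out of date"]
```

The agent would report `ok` and `why` to the SASE service, which then grants, restricts, or denies access accordingly.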

The exact components and functionalities can vary depending on the specific SASE solution provider and the organization’s requirements. However, SASE aims to keep the on-device footprint lightweight and leverage the cloud for most of the heavy lifting, ensuring consistent policy enforcement and optimal performance regardless of the device’s location. These aims do not take into account the latest edge-in and microservice-architecture developments, which the mimik platform enables. This includes:

  • Running microservices that expose APIs directly on devices

  • Handling ad-hoc edge service meshes where microservices interact with each other directly without going through the cloud

The Role of mimik HEC in SASE

Implementation

Now, let’s explore how mimik Hybrid Edge Cloud (HEC) software platform can contribute to the implementation of SASE, enhancing its capabilities for securing endpoint devices.

The mimik HEC is crucial in enhancing SASE implementation by providing innovative solutions and components that ensure secure, efficient, and context-aware protection for endpoint devices. Here’s how mimik contributes:

  1. Distributed Computing: mimik facilitates distributed computing at the edge, reducing latency and enabling real-time analytics and response, essential for security solutions like SASE.

  2. Edge Server Capabilities: Devices powered by mimik can act as edge cloud servers, deploying SASE solutions closer to data sources or users, improving performance, and reducing the load on central servers.

  3. Interoperability: mimik’s platform fosters interoperability between different cloud services, edge devices, and on-premises resources, a critical requirement for implementing SASE in a hybrid environment.

  4. Resource Optimization: Implementing SASE solutions with mimik edgeEngine on the mimik hybrid edge cloud platform can optimize network and computing resource utilization by balancing the load between cloud, edge, and on-premises.

  5. Enhanced Security: Integrating security microservices at the edge using mimik edgeEngine enables granular and context-aware security enforcement, essential for Zero Trust Network Access (ZTNA) and Secure Web Gateway (SWG) components of SASE.

Edge-in Approach with mimik

One of the unique aspects of mimik’s contribution is the ability to move or complement SASE functions further to the edge, even directly on the user or IoT device. This approach enables a more contextualized and efficient security strategy, allowing for device-to-device interaction that is impossible in a traditional cloud-first SASE implementation.

mimik’s Impact on Key SASE Components

Looking at the significant components of a SASE architecture, it is possible to understand the impact of an edge-in approach enabled by the mimik platform:

  • Cloud Access Security Broker (CASB): By running CASB as an edge microservice on the device itself (eCASB), organizations can benefit from:

    • Decentralized Data Management: As cloud applications proliferate, so does the data between devices and these applications. With edge computing capabilities from solutions like mimik edgeEngine, there’s potential for more localized data processing and decision-making at the data source before sending it out. This can be leveraged to inspect data locally on a device before it’s sent to or received from a cloud service, aligning with some CASB functions.

    • Local Policy Enforcement: With the ability to execute applications and processes at the edge, organizations could run lightweight, localized CASB-like functions on the device. This would mean real-time policy enforcement even before data or requests hit the main CASB solution in the network path, enabling multi-cloud brokering right from the device (at the edge) instead of in the cloud.

    • Enhanced Performance: By integrating edge capabilities with CASB functionalities, certain processes can be offloaded to the edge, reducing latency. For instance, initial policy checks or data classifications, augmentation, and tagging can be done on-device, reducing the need for all traffic to be routed through a central CASB solution.

    • Integration with Other Edge Services: As part of a broader edge ecosystem, CASB functionalities can be combined with other edge services, enabling more comprehensive security and data management solutions tailored for specific environments or use cases.

    • Custom CASB Solutions for Unique Use Cases: Developers can potentially build custom CASB solutions tailored to specific organizational needs or niche applications, leveraging the flexibility and capabilities provided by mimik edgeEngine.

  • Zero Trust Network Access (ZTNA): The mimik platform takes a zero-trust network approach as a core feature of the edge system. This approach allows edgeEngine to provide the following:

    • Localized Access Control: With computing capabilities extended to the edge, access decisions can be made locally, right where the request originates. This can result in reduced latency and more efficient access controls, as not every decision must be routed through a centralized authority.

    • Enhanced Security for IoT Devices: IoT devices can often be weak points in a network. If these devices are empowered with edgeEngine capabilities and integrated with ZTNA principles, they could have enhanced security postures, mitigating some of the risks associated with IoT deployments.

    • Integration with Decentralized Applications: As more applications and services become decentralized and move to the edge, integrating ZTNA principles becomes crucial. Using a platform like mimik edgeEngine, developers could create applications with built-in ZTNA functionalities tailored for specific edge use cases.

    • Continuous Authentication and Authorization: ZTNA emphasizes continuous verification, not just at the beginning of a session. With edge computing capabilities, this continuous check can be done more efficiently, utilizing real-time device data.

    • Micro-segmentation at the Edge: ZTNA often employs micro-segmentation to isolate and protect network resources. With edgeEngine, this segmentation could be extended to the edge, providing more granular isolation and protection of resources, data, and services.

  • Next-Generation Firewall (NGFW): The mimik edgeEngine resides on top of the operating system and, therefore, does not have deep access to the network stack and cannot implement features like deep packet inspection (DPI). However, by implementing an API gateway, a microservice running within edgeEngine can enable the following features:

    • Localized Traffic Inspection: With applications and services running on the edge, localized traffic inspection and filtering at the message level can potentially be done. Rather than sending all traffic through a central NGFW, initial inspections and policy checks could be performed on-device or at the edge, enhancing responsiveness and reducing unnecessary traffic loads on central security appliances.

    • Context-rich Policies: The edgeEngine can provide granular, context-rich data from devices, given its edge-centric architecture. This context can be valuable for NGFW functions, allowing for dynamic and adaptive security policies based on real-time device status, user behavior, location, etc.

    • Protection of IoT Devices: IoT devices, often seen as vulnerable network points, could benefit from localized firewall capabilities. By integrating NGFW functionalities at the edge, there’s potential for better security postures for IoT deployments, with immediate threat detection and response.

    • Integration with Edge Services: As more services move to the edge, there’s an increasing need to ensure these services are secured. By integrating NGFW capabilities into edge-based services powered by mimik edgeEngine, there’s an opportunity for holistic security that’s tailored for edge-specific scenarios.

    • Decentralized Threat Detection and Response: By leveraging edge computing capabilities, threat detection and response can potentially be decentralized. If an anomaly or potential threat is detected on a device or within a network segment, immediate action can be taken at the edge, even before the central NGFW or security operations center is alerted.

    • Scalability and Adaptability: With the growth of connected devices and increasing network complexity, scalability becomes a concern for traditional NGFWs. By offloading some functionalities to the edge, there’s potential for more scalable security solutions that adapt to changing network conditions and demands.

  • Secure Web Gateway (SWG): By allowing a microservice to run directly on the device, on top of mimik edgeEngine and behind an API gateway, it is possible to enable an eSWG with the following capabilities:

    • Real-time Content Filtering: An eSWG running on the device can provide real-time content filtering, blocking malicious or inappropriate content before it reaches the user’s device.

    • Local Policy Enforcement: Organizations can implement customized content filtering policies at the edge, ensuring that users are protected from web-based threats even when they are not connected to the corporate network.

    • Reduced Latency: By offloading content filtering to the edge, latency is minimized, resulting in faster web access for users.

    • Improved Performance: An eSWG can optimize web traffic, reducing the load on central SWG solutions and improving overall network performance.

    • Integration with Local Services: Organizations can integrate their eSWG with other local services and security components to provide a comprehensive security posture.

    • Enhanced Privacy: With an eSWG at the edge, user data remains on the device, enhancing privacy and reducing the need to send user data to centralized SWG solutions.
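For instance, the real-time content filtering an eSWG performs can be pictured as a local policy lookup that runs before any request leaves the device. The category database and policy below are invented for the example; a real eSWG would consult a managed, regularly updated policy store.

```python
from urllib.parse import urlparse

# Illustrative eSWG-style check: an on-device microservice consults a
# local policy before any request leaves the device.

LOCAL_POLICY = {"blocked_categories": {"malware", "adult"}}
CATEGORY_DB = {"evil.example": "malware", "news.example": "news"}

def filter_request(url: str) -> bool:
    """Return True if the request may proceed, False if blocked locally."""
    host = urlparse(url).hostname or ""
    category = CATEGORY_DB.get(host, "uncategorized")
    return category not in LOCAL_POLICY["blocked_categories"]

allowed = filter_request("https://news.example/story")   # True
blocked = filter_request("https://evil.example/payload") # False
```

Because the decision is made on the device, the blocked request never generates network traffic, and the URL itself never has to be shared with a central SWG.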

 

Conclusion

Securing endpoint devices is paramount in the ever-evolving landscape of cybersecurity and remote work. Traditional security models have limitations, especially in the face of the cloud, mobility, and the Internet of Things (IoT). Secure Access Service Edge (SASE) represents a new paradigm in security, offering an integrated, cloud-native, and context-aware approach. The mimik HEC is pivotal in enhancing SASE implementation by enabling distributed computing at the edge, fostering interoperability, and providing the tools for secure, efficient, and context-aware protection. By moving or complementing SASE functions to the edge, mimik’s innovative approach enhances security, reduces latency, and opens new possibilities for device-to-device interactions, bolstering the security posture of organizations in a rapidly changing digital world. With SASE and mimik, the future of endpoint security looks brighter, more efficient, and more resilient than ever before.


]]>
Beyond Boundaries: Enabling Performance and Security with API Gateways Everywhere https://mimik.com/beyond-boundaries/ Wed, 16 Aug 2023 02:00:00 +0000 https://stg-2x.mimik.com/?p=79190 In a cloud-first architecture, API gateways play a crucial role in enabling communication between different cloud services and applications.

The post Beyond Boundaries: Enabling Performance and Security with API Gateways Everywhere first appeared on mimik.

]]>
In a cloud-first architecture, API gateways play a crucial role in enabling communication between different cloud services and applications. They act as a central point of control and provide a unified interface to clients, making it easier to manage and monitor the overall system. API gateways also provide a layer of abstraction between the client and the cloud services, allowing different applications to be developed while still using the same services.

In essence, an API gateway is a server that acts as an intermediary between the client and the cloud services, performing tasks such as authentication, rate limiting, caching, and protocol translation. The API gateway can therefore improve the performance, scalability, and security of the overall system architecture. It is commonly used in microservices and serverless architectures.
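Two of these tasks, rate limiting and caching, can be sketched in a few lines. The `MiniGateway` class below is a toy in-process illustration of the idea, not how a production gateway is built; the window size and limits are arbitrary.

```python
import time

# Toy gateway illustrating rate limiting and response caching in front
# of a backend service.

class MiniGateway:
    def __init__(self, backend, max_per_window: int, window_s: float = 1.0):
        self.backend = backend          # callable: path -> response body
        self.max = max_per_window
        self.window_s = window_s
        self.calls = []                 # timestamps of recent requests
        self.cache = {}

    def handle(self, path: str):
        now = time.monotonic()
        # Drop timestamps that fell out of the sliding window.
        self.calls = [t for t in self.calls if now - t < self.window_s]
        if len(self.calls) >= self.max:
            return (429, "rate limited")
        self.calls.append(now)
        if path not in self.cache:      # cache miss: call the backend
            self.cache[path] = (200, self.backend(path))
        return self.cache[path]

gw = MiniGateway(lambda p: f"data for {p}", max_per_window=2)
r1 = gw.handle("/users")   # served by the backend
r2 = gw.handle("/users")   # served from the cache
r3 = gw.handle("/users")   # third call in the window: rate limited
```

Production gateways implement the same ideas at the network layer, with distributed counters and shared caches, but the control flow is essentially this.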

In the conventional API gateway market, various vendors offer solutions for managing, securing, and exposing APIs to external or internal applications. The main players include Amazon API Gateway, Kong, Rapid, Google Cloud (Apigee), and Azure API Management, among others. They offer different solutions based on functionality and features such as proxying, transformation, security, orchestration, and monetization, and developers can choose the right one according to their specific requirements and use cases.

Examining this offering, we can identify three distinct types of API gateways: façade, exposure, and listening endpoint.

The first, the façade gateway, acts as a facade for service implementations that operate in separate environments. It serves as a single entry point for all incoming API requests, abstracting the complexity of the underlying microservices or distributed systems. These gateways are engineered to manage specific protocols such as HTTP and WebSocket, and they primarily focus on addressing security concerns, particularly TLS. By using an API gateway as a facade, organizations can simplify the management of their APIs and services, improve security, and enhance the developer experience for API consumers.

The second, often called an API exposure gateway, is an API gateway that alters or enhances an implemented API running in a different environment. It focuses on making APIs accessible to external consumers, partners, or third-party developers. The main goal of an API exposure gateway is to facilitate secure, controlled, and efficient access to APIs while ensuring a positive developer experience. It can implement business logic, including caching, throttling, and even metering for billing purposes. API exposure gateways are crucial for businesses looking to expose their APIs to a broader audience, foster innovation, and create new revenue streams through API monetization. By providing a secure and controlled environment for API consumption, these gateways enable organizations to maximize the value of their APIs while minimizing risks.

The last, the listening-endpoint gateway, terminates the network connection and is commonly utilized in serverless environments to instantiate the process required for executing the operation requested by the API call. This endpoint acts as an entry point for clients to access the functionality provided by the API, and it is responsible for processing incoming requests, executing the appropriate actions, and returning the expected responses. In most cases, the API and the service function within the same environment.

Though the gateways vary in their execution, their primary goal remains consistent: to act as a central entry point and intermediary for managing, securing, and exposing the APIs of cloud services to external consumers or internal applications. An API gateway enables cloud services to be utilized by other cloud services or client applications without needing to comprehend the service’s inner workings. This gateway can operate in the cloud or near the client applications as an edge cloud broker.

Suppose for a moment that this cloud-centric constraint didn’t exist and it was feasible to run microservices (or functions-as-a-service) at the edge, within the device or system hosting client applications. In that case, a reverse API gateway becomes necessary to expose these microservices. However, instead of exposing cloud services to client applications, it focuses on exposing edge microservices to either client applications or other edge microservices running on different nodes or cloud services. Consequently, each node within a system serves as an individual server running microservices with exposed APIs at the software level, establishing an ad hoc edge service mesh among all nodes capable of discovering one another.

In this edge-first scenario, the reverse gateway would function as a local API gateway, managing the microservices within the device itself. It would play a vital role in managing and securing the communication between microservices and client applications: it would manage, secure, and optimize API traffic within the device or system, providing a unified entry point for accessing microservices and improving overall performance and security. Moreover, the local API gateway would also enable better resource utilization and faster response times, as the microservices would be running in the same environment as the client applications.
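A minimal sketch of such a local (reverse) API gateway might look like the following; the route names and the `/battery` and `/camera` microservices are hypothetical, and a real implementation would expose them over HTTP rather than as in-process callables.

```python
# Toy local (reverse) API gateway: a single entry point on the device
# that routes requests to microservices running in the same environment.

class LocalGateway:
    def __init__(self):
        self.routes = {}

    def register(self, prefix: str, service):
        """Expose an on-device microservice under a path prefix."""
        self.routes[prefix] = service

    def dispatch(self, path: str):
        for prefix, service in self.routes.items():
            if path.startswith(prefix):
                return service(path[len(prefix):])
        return (404, "no such edge microservice")

gw = LocalGateway()
gw.register("/battery", lambda rest: (200, {"level_pct": 76}))
gw.register("/camera", lambda rest: (200, {"status": "idle"}))

status = gw.dispatch("/battery/level")   # handled entirely on-device
missing = gw.dispatch("/telemetry")      # unknown route
```

The key property is that a client application sees one stable API surface for the whole device, while the gateway decides which on-device microservice actually serves each request.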

This reverse API gateway is a natural next step in the evolution of the API Architecture, well described in the Netflix technology blog.

Left to right: 1) Accessing the monolith via a single API, 2) Accessing microservices via separate APIs, 3) Using a single API gateway, 4) Accessing groups of microservices via multiple API gateways. Source: Netflix Technology Blog

Edge microservices can either access edge microservices on other nodes directly, without going through the cloud, via the reverse API gateway, or access cloud microservices via a single API gateway.

The concept of device-as-a-service will then be established, allowing client applications to utilize features from a single device or a collection of devices through a series of APIs without needing to comprehend the inner workings of the implementation. This will spark a surge in innovation, as it enables the development of applications using systems without requiring expertise in those specific systems. For example, considering the automobile industry’s ongoing shift towards software-defined vehicles (SDV), it is crucial to begin exposing car functionalities to developers outside of the automotive realm to harness the creative potential within the mobile app industry. A reverse API gateway is essential for accomplishing this objective.

mimik’s edgeEngine provides this reverse API gateway. It comprises an API gateway, an OS-agnostic runtime environment, a discovery service for nodes and edge microservices, and an edge analytics platform that enables each node to serve as a data source at the application level. With it, client applications can utilize features from a single device or a collection of devices through a series of APIs without needing to comprehend the inner workings of the implementation, enabling the development of applications on top of systems without requiring expertise in those specific systems.


]]>
mimik for Digital Twin https://mimik.com/mimik-for-digital-twin/ Thu, 10 Aug 2023 06:34:01 +0000 https://stg-2x.mimik.com/?p=79436 mimik for Digital Twin

The post mimik for Digital Twin first appeared on mimik.

]]>
Abstract

A digital twin is a virtual representation of a physical object, process, or system. It is a computerized model that simulates the behavior of a real-world object or system in real-time, providing a detailed and accurate reflection of its physical counterpart.

A digital twin is created by collecting data from various sources, such as sensors, cameras, and other IoT devices, and processing that data using machine learning algorithms and other analytical tools. The resulting model can monitor, analyze, and optimize the physical system’s performance and predict future behavior and outcomes.
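A stripped-down version of this loop, ingesting sensor readings and maintaining a model the twin can query, might look like the following. The `MotorTwin` abstraction, smoothing factor, and alert threshold are assumptions made for the sketch; a real twin would use far richer models than exponential smoothing.

```python
# Toy digital twin of a motor: mirrors the latest sensor readings and
# keeps a smoothed temperature estimate it can use for prediction.

class MotorTwin:
    def __init__(self, alpha: float = 0.5, alert_c: float = 90.0):
        self.alpha = alpha          # exponential smoothing factor
        self.alert_c = alert_c      # overheat threshold, degrees Celsius
        self.temp_estimate = None

    def ingest(self, temp_c: float):
        """Update the twin from a new sensor reading."""
        if self.temp_estimate is None:
            self.temp_estimate = temp_c
        else:
            self.temp_estimate = (self.alpha * temp_c
                                  + (1 - self.alpha) * self.temp_estimate)

    def overheating(self) -> bool:
        """Query the twin instead of the physical motor."""
        return (self.temp_estimate is not None
                and self.temp_estimate > self.alert_c)

twin = MotorTwin()
for reading in [80.0, 88.0, 100.0]:
    twin.ingest(reading)
# estimate evolves 80 -> 84 -> 92; the twin now flags a likely overheat.
```

Monitoring and prediction then run against the twin’s state, so potential issues can be flagged without instrumenting every query against the physical system.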

Digital twins are commonly used in manufacturing, aerospace, and energy industries. They can be used to simulate the operation of complex machinery, equipment, and systems and identify potential issues or inefficiencies before they occur in the real world. They are also used in building design and construction to optimize performance, maintenance, and energy efficiency.

A digital twin lifecycle is composed of two main phases:

  1. development phase, pre-production (aka pre-prod)

  2. deployment and update phase in production (aka post-prod)

Pre-prod digital twin

Looking at the development lifecycle of a solution that involves embedded software components (a car, a manufacturing line, etc.) running QNX, many variants of Linux, Android, or even iOS when a user’s phone is involved, a developer implementing a new feature does not have the actual target environment available, the way a cloud developer has a development environment (DEV) for implementation and a QA environment for testing the compliance of the implementation. To remediate this problem, a simulation of the environment has to be created. This is where the need for a pre-prod digital twin emerges.

Adopting modern development practices and creating an environment in the cloud is a natural solution for such a simulation. Because resources in a cloud environment are virtualized and generally pooled using a Kubernetes orchestrator, a natural consequence is to containerize every simulation component. The developer implementing a new feature can then dynamically deploy images and containers using Kubernetes.

This works well assuming the following two conditions hold:

  1. Any legacy software that runs in an actual environment needs to be containerized.

  2. The simulation in the cloud environment must closely mimic the actual environment.

These two conditions are difficult to realize since, in actual embedded environments, real-time operating systems are frequently used, and containerizing legacy components has limitations when dealing with user interfaces and multiple processes within the same container. This means that once QA passes in the cloud, transferring the new feature to the actual environment generally leads to new problems, making the cloud testing largely moot.

Another approach to creating a pre-prod digital twin is replicating the actual environment in the cloud. For that, it is often necessary to run an RTOS like QNX. However, as most container technologies (e.g., Docker) depend on operating-system functions (e.g., cgroups), it is not possible to run these containers on QNX. This is why there is a need for a technology that provides a runtime independent of the operating system, and this is what mimik edgeEngine provides.

Running QNX in the cloud, with mimik edgeEngine on top of it, allows a developer to implement microservices or functions-as-a-service there, making a seamless transition from the pre-prod digital twin to the actual environment possible.

Post-prod digital twin

Once the feature is deployed in real systems, it is essential to have a feedback loop to refine the simulation. It allows developers and system analysts to understand the behavior of the actual system and how well it matches the behavior of the simulated environment. This is where a post-prod digital twin needs to be created.

One solution is to reuse the pre-prod digital twin instance to implement the post-prod digital twin. However, this implies transferring a large amount of data to replicate the context of the actual environment in the cloud, which can cause several problems:

  1. Cost: the more data transferred, the higher the cost, both for transport and for processing, particularly when dealing with low-level signals.

  2. Power consumption: transmitting data over a network generally consumes more power than processing the data locally and transmitting only the results.

  3. Privacy: in some cases the data concerns the user, so transmitting it to the cloud may breach privacy regulations.
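These trade-offs can be illustrated with a toy calculation: pre-processing a raw signal on the device and sending only a summary shrinks the payload by orders of magnitude, which directly reduces transport cost and power draw and keeps the raw data local. The sensor name, trace, and summary fields below are made up:

```python
import json

# Hypothetical raw sensor trace: 10,000 samples captured on the vehicle.
raw_signal = [20.0 + (i % 50) * 0.1 for i in range(10_000)]

# Local pre-analysis: reduce the trace to a small summary ("smart signal").
summary = {
    "sensor": "coolant_temp",  # made-up identifier
    "samples": len(raw_signal),
    "min": min(raw_signal),
    "max": max(raw_signal),
    "mean": round(sum(raw_signal) / len(raw_signal), 3),
}

# Compare what would leave the device in each case.
raw_bytes = len(json.dumps(raw_signal).encode())
summary_bytes = len(json.dumps(summary).encode())
reduction = raw_bytes / summary_bytes  # how much less data is transmitted
```

Here the summary is hundreds of times smaller than the raw trace, and the raw samples never leave the device.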

A better solution is to split the digital twin into two parts: one running in the actual system and the other acting as a consolidation layer in the cloud. Running the consolidation in the cloud makes it possible to deal with multiple actual systems (e.g., cars) and therefore avoid bias when extracting a generic behavior.

A technology is therefore needed that allows microservices to run in any environment (regular OS, real-time OS, main CPU, controllers) to perform pre-analysis and send smart signals to an aggregated simulation running in the cloud. This is what mimik edgeEngine and its different editions (standard, main/child, controller/worker) provide.
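The split can be sketched as follows: each vehicle runs a small pre-analysis step and sends only its summary to a cloud-side aggregator, which consolidates across vehicles to extract a generic behavior. The class and field names are illustrative, not edgeEngine APIs:

```python
# On-device pre-analysis: turn a raw trace into a compact "smart signal".
def pre_analyze(car_id: str, trace: list) -> dict:
    return {"car": car_id, "mean": sum(trace) / len(trace), "peak": max(trace)}

# Cloud-side consolidation across many actual systems (e.g., cars),
# combining per-vehicle summaries to extract a generic behavior.
class FleetAggregator:
    def __init__(self):
        self.summaries = []

    def ingest(self, smart_signal: dict) -> None:
        self.summaries.append(smart_signal)

    def generic_behavior(self) -> dict:
        n = len(self.summaries)
        return {
            "vehicles": n,
            "fleet_mean": sum(s["mean"] for s in self.summaries) / n,
            "fleet_peak": max(s["peak"] for s in self.summaries),
        }

agg = FleetAggregator()
agg.ingest(pre_analyze("car-1", [1.0, 2.0, 3.0]))
agg.ingest(pre_analyze("car-2", [2.0, 4.0, 6.0]))
behavior = agg.generic_behavior()
```

Averaging over many vehicles is what lets the cloud side extract behavior that is generic rather than biased toward a single system.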

The post mimik for Digital Twin first appeared on mimik.

]]>
How to address safety and security for software-defined vehicles https://mimik.com/how-to-address-safety-and-security-for-software-defined-vehicles/ Mon, 23 May 2022 03:31:00 +0000 https://stg-2x.mimik.com/?p=74395 Vehicles have evolved into connected smart gadgets on wheels, similar to how cellphones evolved into pocket computers, with more lines of code than the largest operating systems. The current complexity of a software-defined vehicle (SDV) can significantly affect time-to-market and, ultimately, the speed of innovation. Even more critically, these complications can obstruct the capacity […]

The post How to address safety and security for software-defined vehicles first appeared on mimik.

]]>
Vehicles have evolved into connected smart gadgets on wheels, similar to how cellphones evolved into pocket computers, with more lines of code than the largest operating systems. The current complexity of a software-defined vehicle (SDV) can significantly affect time-to-market and, ultimately, the speed of innovation. Even more critically, these complications can obstruct the capacity to meet safety and security criteria, at least in the automotive industry.

Read more: https://www.autonews.com/sponsored/how-address-safety-and-security-software-defined-vehicles

The post How to address safety and security for software-defined vehicles first appeared on mimik.

]]>
mimik: Hybrid Edge Cloud Leveraging AWS to Support Edge Microservice Mesh https://mimik.com/mimik-hybrid-edge-cloud-leveraging-aws-to-support-edge-microservice-mesh/ Mon, 29 Mar 2021 04:50:43 +0000 https://stg-2x.mimik.com/?p=62626 mimik: Hybrid Edge Cloud Leveraging AWS to Support Edge Microservice Mesh

The post mimik: Hybrid Edge Cloud Leveraging AWS to Support Edge Microservice Mesh first appeared on mimik.

]]>

Michel Burger explains how a hybrid edge cloud platform enables a service mesh, using edgeEngine supported by AWS services to enable direct device-to-device communication at the edge. You will learn how EKS, DynamoDB, and MongoDB are used to help manage the mesh network, and how Kinesis, S3, and Snowflake are used to monitor it.

The post mimik: Hybrid Edge Cloud Leveraging AWS to Support Edge Microservice Mesh first appeared on mimik.

]]>
Microservice-Based Solutions in a Hybrid Edge Cloud Environment https://mimik.com/microservice-based-solutions-in-a-hybrid-edge-cloud-environment/ Tue, 13 Oct 2020 08:40:00 +0000 https://stg-2x.mimik.com/?p=56042 Edge Developer Conference | ECW2020

The post Microservice-Based Solutions in a Hybrid Edge Cloud Environment first appeared on mimik.

]]>

ECW 2020 Keynote by mimik Technology

Microservice-Based Solutions in a Hybrid Edge Cloud Environment | Chief Technology Officer Michel Burger.

Microservice-based solutions are now widely adopted in many industries, from gaming to IIoT. Different systems are provided to deploy and use serverless microservices at the edge in order to offload backend systems residing in the cloud. However, adopting a microservice architecture in a hybrid edge cloud environment also means creating an ad-hoc peer-to-peer service mesh at the edge. This presentation explores the impact of full usage of hybrid edge cloud in microservice-based solutions, at both the architecture and management levels, by going through solutions where serverless microservices deployed on the extreme edge, such as smartphones or smart industrial devices, can discover each other and communicate directly and with a backend cloud infrastructure.

The post Microservice-Based Solutions in a Hybrid Edge Cloud Environment first appeared on mimik.

]]>
edgeEngine: Familiar API’s https://mimik.com/edgesdk-familiar-apis/ Wed, 31 Jul 2019 01:54:25 +0000 https://stg-2x.mimik.com/?p=52059 mimik Chief Technology Officer Michel Burger & Chief Marketing Officer Phil Belanger have a 1-on-1 talk on what developers can look forward to.

The post edgeEngine: Familiar API’s first appeared on mimik.

]]>

mimik Chief Technology Officer Michel Burger & Chief Marketing Officer Phil Belanger have a 1-on-1 talk on what developers can look forward to.

The post edgeEngine: Familiar API’s first appeared on mimik.

]]>