
Eskimo Community Edition
Release Notes

Find below the release notes for the various Eskimo versions.

What's new in Eskimo-CE v0.5.0

Released on Apr 19, 2023

What's new?

  • New UI Layout and theme: Eskimo's Web UI (Web Graphical User Interface) is significantly reworked following the upgrade of jQuery to the latest version (3.6.0) and of Bootstrap to the latest version (5.2.0).
    Improved menu scrolling and highlighting user experience.
    Replaced browser-native alerts with a messaging window using a Bootstrap Modal.
  • Fixed Kubernetes Deployment Management: Kube Slave is no longer forced onto the same node running Kube Master, nor onto any other node where it might not be desired. The whole Kubernetes installation and service deployment approach is now much more consistent and coherent.
  • Kubernetes Deployment can be customized: Administrators can choose the strategy used when deploying services as ReplicaSets or StatefulSets on Kubernetes: either deploy them cluster-wide (on every node of the cluster) or choose explicitly the number of replicas.
  • Upgraded EGMI - Eskimo Gluster Management Interface - from 0.2.0 to 0.3.0: better detection and resolution of problems, multiple bug fixes, and additional problems being detected and handled automatically.
  • Shell framework improvements: a new lock management framework, detection of common CLI utilities' volume mount needs, a better container management framework, etc., improving the overall reliability of Eskimo in production environments and significantly simplifying the development of new third-party services within Eskimo.
  • Removed host / native Kube DNS Management: previous versions of Eskimo leveraged "dnsmasq" to handle host-level resolution of Kubernetes services. This has proven very weak and fragile.
    This system is now removed, and host services can no longer reach Kubernetes services out of the box.
    Instead, Eskimo provides an "eskimo-kube-exec" command that wraps subsequent calls into a specific environment where Kubernetes services are dynamically resolved.
    All command-line clients and utilities from the CLI packages also get an environment where Kubernetes services are resolved.
  • Flink Python infrastructure: Python scripts and programs are now fully supported on top of Flink. Flink packages all Python dependencies in a conda environment carried within the Flink runtime containers.
    Python is now fully available to implement Flink jobs.
  • Docker Images versioning scheme: Eskimo service images are now versioned using docker image tags upon installation. Every time an administrator reinstalls a service, the image tag - either in the native docker image set for native services or in the docker registry for Kubernetes services - is incremented.
    Whenever a service is started or restarted, it automatically uses the latest tag / version available.
    A command line tool called "" enables operators or administrators to edit (customize / extend) a service docker image directly from an Eskimo cluster node and handles the version upgrade automatically. This frees operators from the former requirement to always rely on the Eskimo framework to customize Eskimo services.
  • Rationalized Prometheus service: Prometheus is now split from its exporters. Only the exporters are installed as native cluster node services, while the Prometheus service itself is deployed on Kubernetes.
  • Zeppelin 0.11 custom build: Eskimo provides its own patched and custom build of Zeppelin 0.11, enabling the use of the latest versions of Spark and Flink.
    The ElasticSearch interpreter is also fixed (custom patch), enabling the use of the latest version of ElasticSearch from the ElasticSearch interpreter.
  • General improvement and bug corrections.
    • Various improvements on the Status Page: removed links and replaced them with relevant information on cluster and deployment.
    • Improved the way gluster mounts are handled, both outside and within containers. The system is now expected to be much more robust and more resilient to containers or VMs being paused, relocated, etc.
    • Refactored the Operations ordering framework, enabling restarts and uninstallations to be interleaved with installations; this enables much more consistent upgrades of the cluster topology and addresses shortcomings of the previous version.
    • Various improvements related to Kubernetes cluster management, such as using a specific "Eskimo" namespace, scraping Kubernetes metrics into Prometheus, etc. (among many others).
    • Significantly extended the set of settings available for operators / administrators to customize Eskimo services behaviour within Eskimo Community Edition.
    • Improved Eskimo "About" Box.
    • Fixed font issues and layout problems on documentation (Asciidoc).
    • Various security improvements.
    • Multiple bug fixes and plenty of small improvements.
  • Various Technical improvements
    • Upgraded Maven plugins to latest versions.
    • Replaced HTML Unit by Selenium for Web / HTML tests.
    • Introduced Spring Test framework and Spring profiles for unit tests.
    • Upgraded Spring to latest version (5.3.25). Upgraded Spring Boot to latest version (2.7.9)
    • Upgraded Apache HTTP Components to the latest version (5.2.1)
    • Upgraded most-if-not-all backend java dependencies to latest versions.
    • Introduced Strong typing on java backend for nodes, volumes, services, etc.
    • Significantly improved code coverage by unit tests.
    • Significantly improved integration test.
    • Various backend (Java and Shell) code improvements
    • Various Javascript code improvements
  • Services Version Update.
    • Upgraded the base debian image used to build containers from buster to bullseye
    • Gluster FS upgraded from 6.0 to 9.2
    • Apache Spark upgraded from 3.2.2 to 3.3.2
    • Apache Flink upgraded from 1.14.6 to 1.15.4
    • Apache Zeppelin upgraded from 0.10 to 0.11-eskimo-2 (custom build)
    • Upgraded whole Elastic stack from 8.1.2 to 8.5.3
    • Upgraded cerebro from 0.9.2 to 0.9.4
    • Upgraded cfssl from 1.6.1 to 1.6.3
    • Upgraded kafka version from 2.8.1 to 2.8.2
    • Upgraded kafka manager (CMAK) from to
    • Upgraded prometheus from 2.35.0 to 2.41.0 with
      • Upgraded prometheus push gateway from 1.4.3 to 1.5.1
      • Upgraded prometheus node exporter from 1.0.1 to 1.5.0
    • Upgraded Grafana from 8.5.2 to 9.3.2
    • Etcd upgraded from 3.5.2 to 3.5.7
    • Upgraded Kubernetes from 1.23.5 to 1.26.3 with
      • Introduced CRI-docker 0.3.1
      • Introduced Kube state metrics 2.8.2
      • Upgraded kube router from 1.4.0 to 1.5.3
      • Upgraded CNI Plugins from 1.1.1 to 1.2.0
      • Upgraded CoreDNS from 1.9.0 to 1.10.1
    • Upgraded Kubernetes Dashboard from 2.5.1 to 2.7.0
      • Upgraded Metrics scraper from 1.0.7 to 1.0.8
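
The image-tag increment described above (reinstall bumps the tag, start/restart picks the highest) can be sketched as a small shell routine. This is an illustrative reconstruction of the behaviour described in the notes, not Eskimo's actual implementation; the tag lists below are invented examples.

```shell
# Given the numeric tags already present for a service image, compute the
# next tag to assign on reinstallation. Non-numeric tags (e.g. "latest")
# are ignored.
next_image_tag() {
    local existing_tags="$1"   # whitespace-separated list, e.g. "1 2 5"
    local highest=0
    local tag
    for tag in $existing_tags; do
        # skip anything that is not a pure number
        case "$tag" in
            *[!0-9]*) continue ;;
        esac
        if [ "$tag" -gt "$highest" ]; then highest=$tag; fi
    done
    echo $((highest + 1))
}

next_image_tag "1 2 5"   # prints 6
next_image_tag ""        # prints 1 (no tags yet)
```

Starting or restarting a service would then simply resolve the highest numeric tag rather than a hard-coded version.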

What's new in Eskimo-CE v0.4.1

Released on Oct 4, 2022

What's new?

  • Bug Fix: Forcing kube-slave to be installed on nodes running kube-master as well, since the required virtual networking infrastructure comes only with kube-slave for the time being, and it is required by kube-master and other services.
  • Services Version Update.
    • Apache Spark upgraded from 3.2.1 to 3.2.2
    • Apache Flink upgraded from 1.14.4 to 1.14.6
    • Prometheus push gateway upgraded from 1.3.1 to 1.4.2

What's new in Eskimo-CE v0.4

Released on Jul 7, 2022

What's new?

  • Replaced the Mesos / Marathon couple with Kubernetes to orchestrate containers in the Eskimo cluster. Kubernetes is now integrated and used within the Eskimo cluster to orchestrate containers and services in place of Mesos and Marathon. Each and every Eskimo feature - including the memory management model - is now adapted to work on Kubernetes.
    Eskimo operates Kubernetes entirely automatically and takes care of all the nuts and bolts to have business services - such as Kafka, ElasticSearch, Spark, Flink, etc. - all running effectively and fine-tuned on top of Kubernetes.
    As a consequence of this transition to Kubernetes, every trace of Mesos and Marathon usage is now removed from Eskimo.
    Eskimo distributes a vanilla yet state of the art Kubernetes stack with:
    • Kube Router instead of kube proxy, along with the kubelets, to handle all virtual networking concerns: routing, proxying and firewalling.
    • CoreDNS packaged and deployed within the Kubernetes cluster to provide service name resolution for containers running inside Kubernetes as well as natively outside, on the Eskimo cluster nodes.
    • Etcd deployed in multi-master mode for native high-availability
    • The Kubernetes-dashboard secured and deployed automatically upon installation. Eskimo makes it available regardless of the connection protocol (i.e. including through HTTP).
      The Login is performed automatically by Eskimo.
    • The Eskimo Kubernetes layer, which automates Kubernetes operation entirely and provides location abstraction for kube services as well.
  • Integrated EGMI - Eskimo Gluster Management Interface - for Gluster Nodes Management.
    • EGMI manages the gluster node cluster and takes care of registering individual gluster nodes within the EGMI-managed cluster.
    • EGMI ensures shares are created, and maintains replicas and shards (bricks) as defined by the configured strategy.
    • EGMI solves the most common gluster problems automatically: rebalancing bricks upon node shutdown or failure, handling brick corruption, bricks being down, process startup errors, cluster partitioning problems, etc.
    • EGMI also provides a Web User Interface to follow up on problem resolution, monitor shares and bricks, etc.
    • EGMI runs co-located with every gluster server and relies on zookeeper for master election.
    • Eskimo automatically configures EGMI to manage the gluster shares required by Eskimo services.
    • EGMI ultimately makes gluster management and integration within Eskimo much more straightforward and much more reliable.
  • Eskimo Kubernetes Infrastructure. Eskimo implements a management layer on top of vanilla Kubernetes components and administration command lines to automate Kubernetes deployment and maintenance operations. Administrators setting up or operating an Eskimo cluster do not need any knowledge of Kubernetes operations; Eskimo takes care of everything.
    Some specificities of Eskimo's Kubernetes integration are as follows:
    • The framework sets up the name resolution service (DNS) of the host nodes (Eskimo cluster nodes) in such a way that Kubernetes services can be reached by name on the hosts as well (as opposed to only within containers). Eskimo leverages "dnsmasq" for this.
    • Whenever possible, Eskimo services are reached through the Kube Proxy (kubectl). When this is not possible (for services requiring complex rewrites that prevent usage of the Kube Proxy), services are reached using a node port.
    • All Eskimo pre-packaged services are migrated to Kubernetes. Every service setup script is adapted and optimized to make an effective usage of native Kubernetes features.
  • Introduced a new "Operations Monitoring" view. Instead of showing raw backend messages, backend operations are now monitored in a dedicated view where each and every individual operation is monitored and logged on its own.
    Individual errors are reported on every operation, and every operation's progress is tracked independently. A dedicated log window presents every individual operation's results.
    Monitoring backend operations is much clearer, and tracking the progress of individual operations, as well as identifying their failures, is much easier.
  • Security improvement: introduced different roles for Administrators and Users. Eskimo users now have either the "ADMIN" role for administrators or the "USER" role for plain users.
    Users cannot change the Eskimo configuration or topology, but they can use the Eskimo services that don't provide administrator functionalities.
    Users are attached to either one of these roles upon definition in the User configuration file.
  • Services Restart Management is significantly improved. Services are now only restarted when required by topology changes (e.g. a new node added to the cluster, etc.).
    This makes it possible to add new nodes to the Eskimo cluster without restarting services on other nodes unless strictly required.
  • CLI packages. Now that all services providing cluster-level components such as Kafka, Spark and Flink are deployed through Kubernetes, and no longer natively on every node, specific packages providing their command-line clients are available.
    These CLI - Command Line Interface - packages are intended to be installed on every cluster node where these commands are required.
  • Base components version update.
    Switched to OpenJDK 11 as default for all services except for Zeppelin, which is kept on JDK 8 since it's buggy on anything beyond.
    Switched to Scala 2.12 as the default version.
  • General improvement and bug corrections.
    • Java interpreter is now available and fully functional in Apache Zeppelin.
    • A new sample notebook demonstrates how to create a Kafka Streams application from Apache Zeppelin.
    • Various improvements and bug fixes related to service detection.
    • Various improvements and bug fixes related to SSH Connections Management.
    • Various improvements and bug fixes related to Gluster shares Management on hosts as well as within containers.
    • Various improvements and bug fixes in Zeppelin sample notebooks.
  • Various Technical improvements
    • Upgraded spring framework version to Spring 5 and Spring boot to the latest corresponding version.
    • Introduced lombok for boilerplate code generation.
    • Various Javascript code improvements
  • Services Version Update.
    • Prometheus upgraded from 2.18.1 to 2.35.0
    • Prometheus node exporter upgraded from 0.18.1 to 1.0.1
    • Prometheus push gateway upgraded from 1.2.0 to 1.3.1
    • Grafana upgraded from 6.7.4 to 8.5.2
    • Apache Kafka upgraded from 2.2.2 to 2.8.1
    • Kafka manager (CMAK) upgraded from to
    • Elastic stack upgraded from 7.6.2 to 8.1.2
    • Cerebro upgraded from 0.9.2 to 0.9.3
    • Apache Flink upgraded from 1.10.1 to 1.14.4
    • Apache Spark upgraded from 2.4.2 to 3.2.1
    • Apache Zeppelin upgraded from 0.9-preview1 to 0.10.1
    • Docker registry upgraded from 2.6.2~ds1-2+b21_amd64 to 2.7.1+ds2-7_amd64
    • Introduced Etcd 3.5.2
    • Introduced Kubernetes 1.23.5 with
      • Kube Router 1.4.0
      • CNI Plugins 1.1.1
      • Image - Pause 3.6
      • CoreDNS 1.9.0
    • Introduced Kubernetes Dashboard 2.5.1 with
      • Metrics scraper 1.0.7
    • Got rid of Mesosphere Marathon (previously at version 1.8.222)
    • Got rid of Apache Mesos (previously at version 1.9.1)
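
The dnsmasq-based host-level resolution mentioned above boils down to a per-domain forwarding rule, which dnsmasq supports with its `server=/domain/address` directive. The service domain and the cluster DNS address below are assumptions for illustration, not Eskimo's actual generated configuration.

```shell
# Assumed values -- Eskimo's real service domain and CoreDNS address may differ.
KUBE_DNS_IP="10.254.0.2"                  # cluster DNS (CoreDNS) address (assumed)
KUBE_SERVICE_DOMAIN="svc.cluster.eskimo"  # Kubernetes service domain (assumed)

# dnsmasq per-domain forwarding rule: queries under the service domain are
# forwarded to the cluster DNS; everything else follows the default resolvers.
dnsmasq_rule="server=/${KUBE_SERVICE_DOMAIN}/${KUBE_DNS_IP}"
echo "$dnsmasq_rule"
```

Such a rule would be dropped in a file under /etc/dnsmasq.d/ on each host, letting host-level processes resolve Kubernetes service names by name.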

What's new in Eskimo-CE v0.3

Released on Jul 5, 2020

What's new?

  • Integrated Marathon. Individual services - so-called "unique services" in Eskimo terminology - are now operated on Mesos with the help of Mesosphere Marathon. This is a key step forward in scaling these services and progressively implementing High Availability in the Community Edition of Eskimo as well.
    Marathon is fully supported in operating services and managing their lifecycle, and is fully configured with dedicated configuration screens on Eskimo thanks to the Marathon sub-system.
    Services operated by Marathon are proxied by Eskimo, and the user doesn't need any understanding or knowledge of where they run at runtime.
  • Fixed the Zeppelin ElasticSearch interpreter that was preventing the upgrade of ElasticSearch to version 7.x. The Zeppelin v0.9 ElasticSearch interpreter was not compatible with ElasticSearch version 7.x or above. The fix is implemented as an Eskimo custom distribution of Zeppelin based on Zeppelin 0.9-preview1.
  • Using gluster as a shared filesystem for Marathon services. Eskimo now leverages gluster to share folders between Marathon services and services running on the node hosts. Due to the location abstraction provided by Marathon, no assumption can be made about the node a service operates on.
    Gluster is now used even on single-node deployments to share storage folders between Marathon services and host services.
  • Fine-tuning of Mesos resources. Administrators can now fine-tune the resources declared by mesos-agents on Eskimo cluster nodes. This has become an even more important requirement now that most services are moved to Marathon / Mesos.
  • Utility command to encode passwords. Provided a utility command-line program to generate the encoded password to be stored in the Eskimo users definition file users.json for new users.
  • Mesos monitoring dashboard in Grafana. Leveraging Prometheus and Grafana, Mesos activity can now be monitored in Grafana, both for the Mesos Agents and the Mesos Master. An initial dashboard is provisioned with Eskimo.
  • New custom command framework. Administrators integrating Eskimo services can now configure custom commands available from the status page when clicking on a service status icon in the "Nodes Status Table".
    The Eskimo pre-packaged services implement access to the software services' log files by leveraging this custom commands framework.
  • Eskimo SystemD Unit Configuration file. A SystemD Unit configuration file eskimo.service is added to the distribution package along with a script aimed at installing the service on systemD with all system level installation and configuration required to enable it to run properly.
  • Base components version update.
    Switched from OpenJDK 8 to OpenJDK 11 for all services running on Java / Scala, except Spark and Zeppelin, which support only JDK 8 (as long as Spark 3 is not out in a final version).
    Switched from Stretch to Buster as base Debian Docker image.
  • Various improvements on the integrated SSH Console.
    • Using Ctrl + Shift + Left/Right to navigate back and forth between the consoles opened on the various nodes
    • Supporting copy / paste using keyboard shortcuts (see user guide)
    • Consoles are now sized automatically to the available window space.
  • General improvement and bug corrections.
    • Fixed the problem preventing the Flink App Master and Marathon from running on a different node than the Mesos Master, by properly configuring libprocess (the Mesos native library).
    • Significant improvement of gluster shares management and operation reliability.
    • Significant improvements on services monitoring, detection and management.
    • Significant improvements on process management and monitoring in containers running several processes.
    • Implemented a custom Eskimo solution to handle removing Marathon services' docker containers that mesos-agents fail to remove.
    • Fixed spark wrappers such as spark-shell and spark-sql that were unable to reach the Mesos cluster.
    • Significantly improved services and nodes configuration consistency enforcement.
    • Provisioned a Kibana dashboard for berka transactions (Zeppelin samples)
  • Services Development framework improvement.
    • The Service development framework now leverages docker as well to build the various Apache Mesos distributions. Vagrant or libvirt is not required anymore.
  • Services Version Update.
    • Elastic Stack (ElasticSearch, Logstash, Kibana) upgraded to 7.6.2
    • Apache Spark upgraded to 2.4.5
    • Apache Flink upgraded to 1.10.1
    • Cerebro updated to 0.9.2
    • Kafka Manager updated to
    • Apache Zeppelin upgraded to 0.9-preview-1 (some Eskimo workarounds for known bugs still required)
    • NTP updated to 4.2.8p12 (standard debian buster version)
    • GlusterFS updated to 6.0 (standard debian buster version)
    • Zookeeper updated to 3.4.13 (standard debian buster version)
    • Prometheus upgraded to 2.18.1
      • Pushgateway exporter updated to 1.2.0
    • Grafana upgraded to 6.7.4
  • Services Integration. The following services are fully integrated and operable within Eskimo 0.3, with all the tuning, fixes and required wrappers.
    • Mesosphere Marathon 1.8.222
    • Mesos Prometheus Exporter 1.1.2
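
The eskimo.service SystemD unit mentioned in this release can be pictured with a minimal sketch. The ExecStart path, target ordering and restart policy below are assumptions; the actual unit shipped with Eskimo may differ.

```shell
# Generate an example unit file (illustrative content only -- paths assumed).
eskimo_unit="$(cat <<'EOF'
[Unit]
Description=Eskimo Platform Management Console
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/local/sbin/eskimo.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
)"
echo "$eskimo_unit"
```

Installed under /etc/systemd/system/ and enabled with systemctl, such a unit makes the Eskimo backend start at boot and restart on failure, which is what the accompanying installation script automates.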

What's new in Eskimo-CE v0.2

Released on Dec 30, 2019

What's new?

  • SSH Tunnels to reach Services Web UIs. Instead of reaching the various Web Graphical User Interfaces of services such as the Mesos Console, Spark History Server, Kafka Manager, Cerebro, Kibana, etc. through direct access to the nodes and ports where these services run, the new approach lets Eskimo manage SSH tunnels to reach these services wherever they are located; the iframes within Eskimo's own UI then reach them through these SSH tunnels, with proper HTTP proxying from the Eskimo backend.
    This enables much tighter control of the ports to be opened on individual nodes and removes the need to open access from administrator machines to the internal Eskimo cluster nodes.
    It also enables a much tighter integration between service web consoles and Eskimo itself.
  • Revamped Status Page. The Nodes Status Page is now a Cluster Status Page that provides general health information and statistics about the whole cluster. The services action menu has been reimplemented in a clearer way (just click on a service status to access the action menu).
  • Download of Service Images and Packages. Service images and packages can now be downloaded from a remote repository: it is no longer mandatory to build them locally with each and every Eskimo installation. Images and packages are properly versioned, and the system supports checking for updates and fetching new images when they are available.
  • Support for Microsoft Windows as the execution OS for the Eskimo backend. With the ability to download pre-built services packages (docker images) now implemented, Microsoft Windows is fully supported as the execution platform for the Eskimo backend (not for the cluster nodes, though: a supported Linux distribution is required on the Eskimo cluster nodes).
    Running on Windows, however, makes it mandatory to download pre-built packages from a remote repository. Building images directly from within the Eskimo User Interface is only supported when the backend Operating System is Linux.
  • Logstash is available from the Zeppelin container. If Logstash is installed, it is made available to the Zeppelin container using a command client reaching a command server in the logstash container.
  • Mesos Management Command Line utility. A mesos-cli command line is available from every node of the cluster to administer the Mesos cluster and address features missing from the Mesos command-line utilities, such as the ability to kill a framework in a failsafe manner.
  • Support for the SUSE Operating System. The SUSE Operating System is now fully supported on cluster nodes, in addition to Red-Hat-based OSes (Fedora, CentOS, etc.) and Debian-based OSes (Debian, Ubuntu, etc.)
  • Collection of Zeppelin Demo Notebooks. A comprehensive collection of demo notebooks for Zeppelin is now available.
    These demo notebooks show, for instance, how to implement Spark processes in batch and streaming mode (same for Flink), how to read from and write to ElasticSearch from Spark, how to read from and write to Kafka from Spark and Flink, how to use logstash to feed ElasticSearch, etc.
  • New Services Settings Editing Feature. It is now possible to edit common spark, flink, logstash, elasticsearch, kafka, etc. configuration properties or settings from the UI and inject them in the service runtime (services are automatically restarted with the updated configuration when settings are saved).
    Only pre-defined configuration properties are supported in Eskimo Community Edition. One needs to acquire the Enterprise Edition to be able to define every possible property or configuration file of every Eskimo pre-packaged technology.
  • General improvement and bug corrections.
    • Much better management of SSH Connections and SSH terminals.
    • Much better support for different screen resolutions, especially lower screen resolutions.
    • Much better detection of services startup problems in SystemD unit configuration files.
    • Much better management of gluster shares. (A weakness remains on node unregistration when a node is removed from the cluster. This will be solved in 0.3.)
    • Better handling of messaging and notifications in a multi-user environment.
    • Improvement of the user documentation.
  • Services Development framework improvement.
    • The Service development framework now supports both libvirt and Vagrant to build the various Apache Mesos distributions.
  • General UI Improvements.
    • The services in both the Nodes Status Page and the Services Configuration Page are now accompanied with the respective product icons.
  • Services Version Update.
    • Apache Mesos upgraded to version 1.8.1
    • Apache Zeppelin upgraded to 0.9-SNAPSHOT (development version with fixes and workarounds for known bugs)
  • Services Integration. The following services are fully integrated and operable within Eskimo 0.2, with all the tuning, fixes and required wrappers.
    • Apache Flink 1.9.1
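
The SSH-tunnel mechanism described in this release corresponds to standard SSH local port forwarding: the backend forwards a local port through a cluster node to the node and port where a service's UI runs. The helper below only builds the forwarding specification; the host names and ports are invented examples, not Eskimo's actual wiring.

```shell
# Build an ssh -L local-forwarding spec: traffic to local_port is relayed,
# through the SSH connection, to service_port on target_node.
build_tunnel_spec() {
    local local_port="$1" target_node="$2" service_port="$3"
    echo "-L ${local_port}:${target_node}:${service_port}"
}

# e.g. reach Kibana (port 5601) on node 192.168.10.13 via local port 9201,
# tunnelling through a gateway node (addresses are illustrative):
echo "ssh -N $(build_tunnel_spec 9201 192.168.10.13 5601) eskimo@192.168.10.11"
```

Eskimo's backend manages such tunnels itself and proxies the UI iframes through them, so no port besides SSH needs to be opened towards the cluster.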

What's new in Eskimo-CE v0.1

Released on Jul 24, 2019

What's new?

  • Docker Images Development Framework. The docker images development framework provides standards, principles and tools enabling anyone to build their own service docker images to be installed and operated by Eskimo. The initial set of Eskimo pre-packaged service containers is implemented on top of this framework.
  • Services Installation Framework. The Services Installation framework is Eskimo's key feature, enabling it to install, configure, manage, operate, move and uninstall services (their docker containers) on nodes. It is based on docker and a standard systemd approach for operation on the nodes.
  • Master election and dependencies Management. Dependency management between services and enforcement of dependencies between services across nodes. This enables an Eskimo service developer to define its service dependencies in a configuration file and then let Eskimo enforce these dependencies: handle the order of installation, the order of service restarts, the restart of dependent services after a service moves to another node, etc.
    This module also takes care of re-configuring dependent services when the topology of dependencies evolves.
  • Initial Basic Memory Management Framework. This module computes the available memory on every node and uses the memory requirement declaration of every service (configuration) to distribute a fair share of the available memory on every node to every service the node is hosting.
  • Eskimo Platform Management User Interface. This is the initial version of the Eskimo Platform Management Console Web Graphical User Interface which provides specific features, such as the SSH terminals, the nodes' configuration page, etc. as well as the embedding of third party services User Interfaces within its own UI.
  • Eskimo Setup Page and Backend. The Eskimo Setup Page and Backend implements the initial setup of the Eskimo Platform management console, where the SSH configuration used to reach the Eskimo Cluster nodes is defined as well as, for instance, the way to find the Eskimo Managed Services Docker images (download or build).
  • Eskimo Nodes Configuration Page and Backend. This is the most important Eskimo feature: the page and backend implementation of the nodes' configuration, where the topology of the cluster is defined by declaring the services to be executed on every node of the cluster. Nodes can be configured as individual nodes or ranges of nodes (IP addresses). This module controls the dependencies enforcement and performs the installation, configuration, uninstallation, etc. of services when the configuration is applied.
  • Eskimo Nodes Status Page and Backend. This is the Eskimo cluster's main monitoring page, where the topology of the cluster is presented and monitored and where individual service states can be monitored. Actions on individual services can be carried out from there.
  • Eskimo Backend Messages Page and Backend. Messaging and monitoring feature for operations in progress (installation, configuration, uninstallation, etc.) on the backend. This includes support for multi-user environments: when an administrator triggers operations, all connected operators are notified.
  • Eskimo Web-based SSH Terminals Feature. Eskimo SSH terminals feature, just as a plain old SSH terminal but within a web page. Eskimo enables administrators to open SSH terminals on cluster nodes directly from within the Eskimo Web GUI.
  • Eskimo Web-based SFTP Terminals Feature. Web-based file manager using SFTP to connect to cluster nodes. Just as a plain old file manager with file visualization, but from within the Eskimo Web GUI.
  • Support for Debian-based and Red-Hat-based Linux nodes. Support for Debian-based and Red-Hat-based Operating systems on Eskimo cluster nodes (Debian, Ubuntu, Red-Hat, Fedora and CentOS for now).
  • Basic authentication and authorization and login page. Authentication and Authorization framework. For now authorization is quite trivial, and authentication is based on a local filesystem user file. This shall evolve with further versions of Eskimo.
    Eskimo EE offers impersonation of system users when executing commands (including the File Manager and SSH terminals). Eskimo CE executes everything as a default user.
  • Gluster FS volumes and mounts management framework. System-wide management of Gluster shares and mounts for services requiring such shared folders such as Spark, Zeppelin, etc.
  • Services Integration. The following services are fully integrated and operable within Eskimo 0.1, with all the tuning, fixes and required wrappers.
    • NTP (standard debian stretch version)
    • GlusterFS (standard debian stretch version)
    • Zookeeper (standard debian stretch version)
    • Apache Mesos version 1.7.1
    • Apache Spark version 2.4.4
    • ElasticSearch version 6.8.3
    • Elastic Logstash version 6.8.3
    • Elastic Kibana version 6.8.3
    • Zeppelin version 0.8.1
    • Cerebro version 0.8.4
    • Apache kafka version 2.2.0
    • Kafka Manager version
    • Grafana version 6.3.3
    • Gdash (Gluster Dashboard) version 0.0.a1
    • Prometheus version 2.10.0 with
      • Node Exporter version 0.18.1
      • Push Gateway version 0.8.0
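
The memory management framework introduced in this release distributes a node's available memory proportionally to each service's declared requirement. The arithmetic can be sketched as below; the weights and the availability figure are invented for illustration, not Eskimo's actual declarations.

```shell
# Proportional share: a service declaring service_weight out of total_weight
# units receives that fraction of the node's available memory.
compute_share() {
    local available_mb="$1" service_weight="$2" total_weight="$3"
    echo $(( available_mb * service_weight / total_weight ))
}

# Node with 12000 MB usable; assume elasticsearch declares weight 3,
# spark 2 and kafka 1 (total 6):
compute_share 12000 3 6   # prints 6000 (elasticsearch share, in MB)
compute_share 12000 2 6   # prints 4000 (spark share)
compute_share 12000 1 6   # prints 2000 (kafka share)
```

The computed share would then typically be injected into each service's heap or memory settings when the node configuration is applied.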