Full Session Description:
JFrog Mission Control offers centralized control, management and monitoring for all your enterprise artifact assets globally.
In this training, learn how to use JFrog Mission Control to accomplish standard tasks across a multi-site topology of JFrog Artifactory. You will learn the basics of configuring, synchronizing, and managing multiple JFrog Artifactory instances using JFrog Mission Control.
Who should attend:
JFrog Artifactory Administrators with multi-site topologies in their enterprise. This will be a hands-on course.
Technical Requirements:
Please bring your own laptop, power cables, USB devices etc.
Full Session Description:
In this class, students will learn how to leverage Artifactory to achieve high availability, scale with S3 file storage, and utilize multi-push replication. We will go through all of Artifactory's binary providers in detail and see how to use them in different configurations. You will also learn how to replicate a local repository from a single source to multiple enterprise target sites simultaneously.
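For context, the filestore is configured through Artifactory's binarystore.xml; a minimal sketch of an S3-backed provider chain might look like the following (the bucket, endpoint, and credentials are placeholders, and exact template names vary by Artifactory version):

```xml
<!-- binarystore.xml: illustrative S3 filestore chain; all values are placeholders -->
<config version="1">
    <chain template="s3"/>
    <provider id="s3" type="s3">
        <endpoint>s3.amazonaws.com</endpoint>
        <bucketName>my-artifactory-filestore</bucketName>
        <identity>AWS_ACCESS_KEY</identity>
        <credential>AWS_SECRET_KEY</credential>
    </provider>
</config>
```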
Students will also review the ideal configuration to start containers on 3,000 Docker hosts without any image cache in less than 10 minutes in one Amazon region.
You will also get an overview of centralized control, management, and monitoring of all your enterprise artifact assets globally with the help of JFrog Mission Control. No global enterprise plan is complete without off-site disaster recovery, so we will also cover how to leverage these tools to create and manage it.
Who should attend:
Developers and DevOps engineers who are currently using Artifactory, or considering doing so, and investigating Enterprise features and global artifact management architectures
Full Session Description:
Bintray gives developers full control over how they store, publish, download, promote and distribute software with advanced features that fully automate the software distribution process. Together, JFrog Bintray and JFrog Artifactory form the only end-to-end solution for a fully automated continuous delivery pipeline in software development.
In this training, you will learn how to leverage JFrog Bintray to distribute approved artifacts and releases from JFrog Artifactory to your customers as quickly as possible. We will demonstrate the basics of JFrog Bintray and best practices for using these products together. We will also showcase how administrators can guarantee the right access and entitlements and minimize security issues, all while maintaining a reliable platform.
Who should attend:
Developers and DevOps engineers who are looking to do smart distribution of their artifacts with control and accuracy.
Technical Requirements:
Please bring your own laptop, power cables, USB devices etc.
This workshop will provide a hands-on experience with a turnkey implementation of a scalable Jenkins-as-a-service solution, based on CloudBees Jenkins Enterprise. The workshop will utilize the same microservice example used in the DevOps 2.1 Toolkit workshop, walking you through the software development lifecycle using the tools and features provided by CloudBees Jenkins Enterprise. The audience will perform exercises that illustrate the distributed and scalable architecture provided by the CloudBees Jenkins Enterprise cluster.
From quickly provisioning your very own CloudBees Jenkins Enterprise Managed Master, to setting up a custom template for built-in, ephemeral, and elastic Docker-based Jenkins agents, to dynamically creating Jenkins Pipeline jobs, you will have a true hands-on experience with the features CloudBees Jenkins Enterprise provides, based on a highly scalable Distributed Pipeline Architecture.
Who should attend:
Anyone interested in a highly scalable architecture for enabling continuous delivery.
Full Session Description:
While Docker has enabled an unprecedented velocity of software production, it is all too easy to spin out of control. A promotion-based model is required to control and track the flow of Docker images, just as it is for a traditional software development lifecycle. Students will learn how to go from development to containerization to distribution, using binary management promotion in a framework implemented on Jenkins Pipelines.
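As one illustration of the promotion model, Artifactory exposes a Docker promotion endpoint; a minimal sketch follows, where the host, repository names, image name, and credentials are all invented for illustration:

```python
import requests

ARTIFACTORY = "https://artifactory.example.com/artifactory"  # hypothetical host

# Promote myapp:1.0.3 from the dev Docker registry to the prod registry.
# Artifactory's Docker promotion endpoint: POST /api/docker/<repoKey>/v2/promote
resp = requests.post(
    f"{ARTIFACTORY}/api/docker/docker-dev-local/v2/promote",
    json={
        "targetRepo": "docker-prod-local",  # destination repository
        "dockerRepository": "myapp",        # image name
        "tag": "1.0.3",
        "copy": True,                       # copy rather than move
    },
    auth=("admin", "password"),
)
resp.raise_for_status()
```

In a Jenkins Pipeline, a step like this would typically run only after tests pass, so the production repository never contains unverified images.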
Who should attend:
Developers and DevOps engineers who are currently using JFrog Artifactory and Docker, and current Docker users who are considering using JFrog Artifactory as their trusted registry.
Technical Requirements:
Please bring your own laptop, power cables, USB devices etc.
Full Session Description:
This training will be divided into two parts. The first part will include an overview of JFrog Artifactory and its ecosystem, basic installation notes, and the HA architecture. This will be followed by an introduction to repositories, the main building blocks of JFrog Artifactory, showing how to configure each repository type along with some best practices. Next we will show the power of JFrog Artifactory's automation tools, the REST API and the CLI, and how to use them. We will discuss the integrations JFrog Artifactory offers with security protocols, the configuration of users, groups, and permissions, and, finally, how to monitor, track, and understand JFrog Artifactory logs.
The second part will focus on the different technologies, build tools, and CI servers commonly used with JFrog Artifactory, along with some best practices and use cases. We will demonstrate the power of metadata by attaching properties to artifacts, generating build information, and using JFrog Artifactory Query Language (AQL) for querying and searching. This will be a hands-on course.
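As a taste of the metadata features, properties can be attached to any artifact through the REST API; a minimal sketch, with host, path, credentials, and property names invented for illustration:

```python
import requests

ARTIFACTORY = "https://artifactory.example.com/artifactory"  # hypothetical host

# Set Item Properties: PUT /api/storage/<repo>/<path>?properties=k1=v1;k2=v2
path = "libs-release-local/com/acme/app/1.0/app-1.0.jar"
url = f"{ARTIFACTORY}/api/storage/{path}?properties=qa.approved=true;qa.tester=alice"
resp = requests.put(url, auth=("admin", "password"))
resp.raise_for_status()
```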
Who should attend:
Developers and DevOps engineers who are looking to get an overview of JFrog Artifactory
Technical Requirements:
Please bring your own laptop, power cables, USB devices etc.
Full Session Description:
Learn how to take your JFrog Artifactory use to the next level. This training session will teach you how to extend JFrog Artifactory in automation scenarios by mining your artifacts' metadata to release faster. We will cover Artifactory's automation mechanisms (the CLI, AQL, the REST API, and user plugins), and attendees will learn to use them to manage an artifact's lifecycle.
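As a flavor of what this looks like in practice, AQL queries are POSTed as plain text to Artifactory's search endpoint; a minimal sketch, with host, repository, and property names invented:

```python
import requests

ARTIFACTORY = "https://artifactory.example.com/artifactory"  # hypothetical host

# AQL is submitted as a text/plain body to /api/search/aql.
query = 'items.find({"repo": "libs-release-local", "@qa.approved": "true"})'
resp = requests.post(
    f"{ARTIFACTORY}/api/search/aql",
    data=query,
    headers={"Content-Type": "text/plain"},
    auth=("admin", "password"),
)
resp.raise_for_status()
for item in resp.json()["results"]:
    print(item["repo"], item["path"], item["name"])
```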
Who should attend:
Developers, QA, and DevOps engineers who have some experience using JFrog Artifactory and are familiar with basic JFrog Artifactory concepts and usage.
Technical Requirements:
Please bring your own laptop, power cables, USB devices etc. Software required: Java 8, cURL, and Groovy 2.4+.
Full Session Description:
In this hands-on class, students will start by packaging a sample application and publishing it to Artifactory. They will then learn how to use JFrog Xray to index the application and use its component graph to scan for known vulnerabilities and other issues. Next they will use Snyk to retrieve contextual information about each vulnerability, and will have a hands-on session actually exploiting the vulnerabilities in this application. The exploitation section will be followed by using Snyk to remediate the vulnerabilities, re-publishing the application to Artifactory, and lastly verifying with JFrog Xray that the vulnerabilities were removed. This is a half-day class.
Who should attend:
Application Security Engineers, DevOps engineers and Developers who are looking to get hands-on experience of using JFrog Xray and Snyk to detect and fix vulnerabilities in their open source dependencies and binaries.
Technical Requirements:
Please bring your own laptop, power cables, USB devices etc.
Full Session Description:
This course will provide both a theoretical and practical introduction to a complete DevOps solution for C/C++ projects. The topics will include
Who should attend:
C/C++ developers, Builders, Architects, Project Managers, DevOps engineers
Technical Requirements:
By running Artifactory on DC/OS, it is possible to easily and quickly create a scalable continuous delivery pipeline for containers. In this talk, we show how Artifactory can easily be deployed in a highly available configuration and plumbed into a continuous deployment process using Jenkins.
Moving to containerised infrastructure provides significant benefits to a modern enterprise technology organisation, providing a clean abstraction between operators and developers through a set of APIs and services. This abstraction allows developers to easily build and maintain their own continuous deployment pipelines, while infrastructure operators can concentrate on providing a fast, efficient and dynamic environment for them to deploy onto.
These pipelines string together components of a production environment, from code repository to artifact store to continuous integration system, eventually deploying to a production cluster of machines. Typically, artifacts will be built, stored, and retrieved multiple times, and on a large cluster they may be pulled down thousands of times.
In this presentation, Mesosphere engineers show how you can easily set up continuous deployment pipelines for hundreds of developers that scale up to thousands of nodes. We will demonstrate how you can quickly and easily deploy a highly available installation of Artifactory onto the Datacenter Operating System (DC/OS) in order to provide a robust artifact store for developers to store and deploy build artifacts from. We will then integrate Artifactory with our Jenkins continuous integration service to continuously deploy a workload onto DC/OS.
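To sketch the idea, deploying onto DC/OS amounts to POSTing a Marathon app definition; in the example below, the Marathon URL, image name, and resource figures are illustrative only, and the JSON schema varies slightly by Marathon version:

```python
import requests

MARATHON = "http://marathon.mesos:8080"  # hypothetical Marathon endpoint

# A minimal Marathon app definition running Artifactory in a Docker container.
app = {
    "id": "/artifactory",
    "cpus": 2,
    "mem": 4096,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "docker.bintray.io/jfrog/artifactory-pro:latest",
            "network": "BRIDGE",
            "portMappings": [{"containerPort": 8081, "hostPort": 0}],
        },
    },
}

resp = requests.post(f"{MARATHON}/v2/apps", json=app)
resp.raise_for_status()
```

A production HA deployment would additionally mount shared storage and point at an external database; the DC/OS package for Artifactory wraps much of this up for you.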
Running containers as a part of your computing infrastructure is the new trend and follows what companies like Twitter, Facebook, and Google have been doing for years. However, the traditional approaches to building Docker containers are generally very different from the approach taken by the tech giants. While it’s easy to use base images you find on Docker Hub to produce an app image, it leads to larger images (sometimes with 98% extraneous files!) and introduces unused components into your app.
This talk will demonstrate a different approach to building Docker containers the way Google does, using its open source Bazel build system to containerize your app, and not your operating system.
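A minimal sketch of the approach using Bazel's rules_docker (target and base-image names are placeholders, and rule locations vary across rules_docker releases): instead of layering your app onto a full OS base image, you assemble an image containing only the application and its runtime.

```python
# BUILD file sketch using rules_docker; all names are illustrative.
load("@io_bazel_rules_docker//container:container.bzl", "container_image")

container_image(
    name = "app_image",
    # A slim base such as a distroless image, not a full OS.
    base = "@java_base//image",
    files = [":app_deploy.jar"],          # only the application itself
    cmd = ["java", "-jar", "/app_deploy.jar"],
)
```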
C and C++ together have one of the largest shares, if not the largest, among programming languages, and are used in many important industries: embedded, finance, research, robotics, gaming, etc. However, these languages have traditionally lacked a widely used package manager, so every project deploys its own custom tools for managing code reuse, as well as for project automation and continuous integration, at a very high cost in money, time, and human resources.
Now, with Artifactory, its support for the Conan C/C++ package manager, and the integration with Jenkins CI, it is possible to define an effective, automated DevOps process for C and C++ projects that efficiently manages binary creation, testing, and reuse, so important in the huge projects typical of C and C++, as well as binary generation and reuse across multiple platforms, compilers, and configurations.
First, the challenges of package management, continuous integration, and DevOps for C/C++ will be introduced. Then we will cover the basics of the Conan C/C++ package manager, how it integrates with Artifactory, and how it can be used to generate and reuse C/C++ packages. Finally, we will show how to deploy a continuous integration system with Jenkins CI and integrate everything together. A simple but complete setup will be explained and demoed.
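To make this concrete, here is a minimal sketch of a Conan recipe (Conan 1.x API; the package name, sources, and library name are invented):

```python
# conanfile.py: a minimal illustrative recipe
from conans import ConanFile, CMake

class HelloConan(ConanFile):
    name = "hello"
    version = "1.0"
    settings = "os", "compiler", "build_type", "arch"
    exports_sources = "src/*"

    def build(self):
        # Build the library with CMake using the current settings/profile.
        cmake = CMake(self)
        cmake.configure(source_folder="src")
        cmake.build()

    def package(self):
        # Copy headers and the built library into the package folder.
        self.copy("*.h", dst="include", src="src")
        self.copy("*.a", dst="lib", keep_path=False)

    def package_info(self):
        self.cpp_info.libs = ["hello"]
```

Building and publishing then look roughly like `conan create . demo/stable`, `conan remote add my-art https://artifactory.example.com/artifactory/api/conan/conan-local`, and `conan upload hello/1.0@demo/stable -r my-art --all` (the remote URL and user/channel are invented for illustration).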
We’ll show why the staging server methodology is incompatible with microservice architecture. We’ll cover why containers unlock the key to ephemeral testing environments that eliminate the huge feedback bottlenecks that cripple development. We’ll show how asset and container management is critical.
The old staging methodology is broken for modern development. In fact, the staging server is a leftover from when we built monolithic applications. Find out why microservice architectures are driving ephemeral testing environments, and why dev shops of every size should deliver true continuous deployment.
Staging servers slow down development with merge conflicts, slow iteration loops, and labor-intensive processes. To build better software faster, containers and infrastructure as code are key in 2017. DevOps professionals miss this talk at their own peril.
This talk will contrast having a tools team of 80+ people to support and run your CI/CD infrastructure with the challenges of working at a new startup with no infrastructure or tools team.
NetApp's integration with JFrog Artifactory enables developers to manage build artifacts and repositories and instantly collaborate with remote sites. Integrating JFrog Artifactory with the Jenkins plugin accelerates the CI/CD process using NetApp persistent storage and REST APIs.
Next-generation, cloud-native applications require a consolidated, structured way of managing builds and artifacts. JFrog Artifactory provides solutions to automate software package management and speeds up development cycles with binary repositories. Binary repositories tend to grow during the software development lifecycle (SDLC) of multiple web and mobile applications, and storing many versions of artifacts and builds drives higher cost, which is especially pressing in a cloud-based environment. NetApp storage efficiencies reduce the storage footprint of large repositories managed by JFrog in on-premises and hyperscaler-based cloud setups. StorageGRID from NetApp is a high-volume object store with large storage capabilities over S3 that enables collaboration on builds among global sites. The NetApp-JFrog integration enables instantaneous copies of builds and artifacts to remote sites using cheaper, deeper object storage, providing more storage for less cost and management overhead.
Key takeaways:
1) A single pane of management from JFrog for consistent, highly available data in local sites.
2) Instantaneous metadata updates across remote sites for effective collaboration using the NetApp-JFrog integration.
3) The performance, growing capacity, and storage efficiency that build repositories require.
Adobe’s successful transition from selling packaged products to a cloud-based subscription model is closely tied to our adoption of DevOps principles and investment in new technology. The radical transformation of our development practices required replacement of disparate, legacy services and custom tools. From the consolidation of 40+ disparate repositories (including Nexus, custom, etc.) to hosting over 3 million artifacts in less than two years, Artifactory’s role in this transformation is significant. While leveraging Artifactory’s enterprise-level, centrally managed yet distributed, highly available out-of-the-box solution, Adobe also benefits from Artifactory’s CI/CD integrations, component analysis and security (Xray + Aqua), distribution (Bintray), and customization for self-service tools and automations (REST API and user plugins).
Dave Meurer will present Black Duck Hub’s integration with both JFrog Artifactory Pro and Xray, providing details on the customer value proposition, use cases, and demonstrations for each integration. Yes, that's 3 demos for the price of 1!
Combining the power of JFrog Xray and JFrog Artifactory with Black Duck Hub allows organizations to eliminate open source security vulnerabilities, meet license compliance obligations and limit operational risk.
Black Duck’s integration with JFrog Artifactory and Xray allows organizations to manage both build output scanning and repository inspection at different levels of the SDLC: at the repository level with Artifactory Pro, or outside the formal SDLC process with Xray.
Agenda:
- Recent Vulnerability Example
- Black Duck Overview
- 3 Demos of 3 separate Black Duck + JFrog integrations.
Do you want to hear an amazing story?
About a traditional bank becoming agile?
About a standard CD service for ~1000 teams worldwide?
About cool new techniques and cutting-edge technology?
Join me in this session! You will not be disappointed!
ING, a global financial institution offering retail and wholesale banking services to customers in over 40 countries, has established an innovative continuous delivery practice that helped transform them into a leading Fintech company. In this session, ING will share its Continuous Delivery as a Service (CDaaS) concept and its journey that sped up the delivery process from weeks to hours. You will learn about the challenges that ING encountered and solved in this journey and the next milestones in this global continuous delivery journey, including how they are leveraging JFrog Artifactory.
During the past year, Microsoft has made significant contributions to the open source community. We have open sourced tools such as Visual Studio Code, PowerShell Core, and .NET Core, and even added support for Bash to Windows 10. In this session I will teach you how you can use these open source tools in your dev and production environments to implement DevOps best practices such as source control and continuous updates in conjunction with Microsoft’s Azure cloud. You will learn about CI/CD and how to run a complete pipeline using JFrog Artifactory, Azure, Docker Swarm on Azure, and Jenkins to launch a website through a Jenkins pipeline using JFrog's plugin. Come on out and learn how to safely build and deploy a website in a Docker container, with version control, JFrog, and CD included, in just 45 minutes!
Open source modules, such as those pulled from npm, RubyGems, and Maven, are undoubtedly awesome. However, they also represent an undeniable and massive risk. You’re introducing someone else’s code into your system, often with little or no scrutiny. The wrong package can introduce severe vulnerabilities into your application, exposing your application and your users' data.
The talk will use a sample application, Goof, with various vulnerable dependencies that we will exploit as an attacker would. For each issue, we’ll explain why it happened, show its impact, and, most importantly, see how to avoid or fix it.
Businesses are in a race for continuous innovation to deliver applications with amazing customer experience to attract, engage and retain customers. Developers are struggling with a dynamic application environment which requires a new, comprehensive approach to traditional monitoring and the need for real-time analytics and visibility across their entire application stack.
This presentation will highlight how Sumo Logic unifies logs and metrics, combined with its advanced machine data analytics, to help its customers deliver that great user experience, and will also highlight our joint solution with JFrog that gives users access to advanced analytics and metrics through out-of-the-box dashboards directly from JFrog Artifactory.
There is a lot of buzz around polyglot development, and with good reason. Polyglot, especially when coupled with microservices, enables developers to build services faster while using the best tools for the job. But how do you enable new technologies when your organization already relies on proven infrastructure? How do you provide language-native tooling for polyglot developers without reinventing the wheel every time?
In this talk we’ll learn about the journey Netflix made while transitioning from being predominantly JVM-based to fully embracing polyglot, and the lessons we learned in the process. We’ll show how we were able to leverage much of our existing infrastructure while maintaining (near) native ergonomics, and how Docker was used to tie everything together.
As in a good Greek tragedy, scaling DevOps to big teams has 3 stages and usually ends badly. In this play (it’s more than a talk!) we’ll present Pentagon Inc. and its way of scaling DevOps from a team of 3 engineers to a team of 100 (spoiler: it’s painful!)
In this talk we’ll take you on a scaling journey, from 3 developers to 100. We’ll talk about the challenges each milestone in this growth brings, both technological and methodological, and how to solve those challenges using the right mix of people, the right selection of tools, and a correctly crafted process. The speakers excel in the different aspects of this triangle and have been through this journey (more than once) themselves. And a fun and entertaining presentation as a Greek tragedy can’t hurt, can it?
Economic reality and technology are shortening business cycles. What this means for companies, new and established, is that the windows of opportunity for capitalizing on market trends are shrinking every year.
The solution is to get applications out faster and better. The two-week release cycle of Facebook’s mobile application has set the pace for other companies. We’ve heard from older, more traditional companies that this pressure is forcing strange and unnatural behaviors that are causing their existing release processes to break. The number one source of friction is the database change process.
At Datical, we believe that database changes should be treated as tier-one artifacts in your release cycle. To truly achieve the necessary visibility and traceability with database changes such that they can be managed as part of the larger release process, it’s necessary to store those database changes in JFrog Artifactory. Doing so allows our development and operations teams to provide more nimble support to the business. In turn, it also enhances IT’s ability to support the strategic needs of the business.
Let’s take a step back and remember why we use Artifactory in the first place. Simply put, we need a single source of truth. Of course, we have that with our source code control, but that’s not the best place to put our applications ready for release. Artifactory provides a mechanism for anyone in the organization to request the latest released version of an application, along with any dependencies required by the new application version.
But you are not including the database changes, and this oversight is causing you serious problems.
If you’re including your database as part of your architecture, why aren’t you including it as an artifact? Simply put, your application releases need to be able to do the following:
1. Include all necessary components to run the application on a freshly provisioned environment (for development and testing purposes; think cloud).
2. Provide an upgrade from a previously deployed version to the current release (for production, with existing data that must be protected).
3. Provide a rollback to a previously deployed version (again, for production, with existing data that must be protected).
Obviously, cramming SQL scripts into Artifactory isn’t going to meet these goals. Maybe a single SQL file that creates an empty schema will meet our first requirement, but it will fail for upgrades (number two) and rollbacks (number three).
The next step we see our customers take (before using Datical DB) is to have incremental upgrade scripts. The challenge with this approach is that it is impossible to automate. A person has to verify which SQL scripts have or have not been run and take action. Moreover, the rollback requirement requires a person to verify that the rollback SQL exists and will work as expected. Thus, a seemingly well-intentioned edict has created a huge overhead for our development and operations teams. We have solved this problem for very large companies with a combination of JFrog Artifactory and Datical DB.
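Datical DB builds on the open source Liquibase engine, so as an illustrative flavor (table and column names invented), an automatable, rollback-aware change looks something like this:

```xml
<!-- Illustrative Liquibase-style changeset; names are invented -->
<changeSet id="20170512-01" author="dev">
    <addColumn tableName="customer">
        <column name="loyalty_tier" type="varchar(16)"/>
    </addColumn>
    <!-- An explicit rollback block is what makes requirement 3 automatable -->
    <rollback>
        <dropColumn tableName="customer" columnName="loyalty_tier"/>
    </rollback>
</changeSet>
```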
The first step our customers take is using Datical DB to orchestrate their database changes. We can gather these changes from a variety of sources: Datical DB can consume SQL scripts, take deltas of existing database schemas, or provide wizards to create changes. Once those changes have been “Daticalized”, they can be pushed to target environments with the same toolchain you use to push your application binaries. Using Jenkins to compile source code and put the binary into Artifactory? No problem! Just use the Datical DB plugin for Jenkins. Using an Application Release Automation (ARA) solution with Artifactory? Again, no problem! We support all ARA providers.
All of this helps you go much faster, but you need a solution that allows you to go fast without increasing risk. That is where Datical DB forecast comes in. Datical knows the state of all your environments, which artifacts have been deployed where, and it will forecast whether a change will work in that environment (you can’t add a column to a table that doesn’t exist yet in pre-prod!). Datical forecast also makes sure that your artifacts follow corporate best practices, standards and governance without requiring a manual review. Are developers following naming conventions? No creating changes without comments!
Once your Datical DB database changes are stored in Artifactory, you will find that you can deploy your entire application stack to any environment. You will see an increase in speed coupled with reduced risk and lower deployment-management costs. This decreases the resources you need to dedicate to the change process and maximizes customer satisfaction through timely releases that enhance the user experience.
When we talk about automation in software development, we immediately think of automated builds and deployments. We may also be using scripts to help make our daily work easier. But this is really just the beginning of the rise of the machines. I'll show you how leading developers in our industry are using open source and commercial tools to automate much more. They’ve got “robots” for monitoring production servers, updating issues, supporting customers, reviewing code, setting up laptops, producing development reports, collecting customer feedback, even automating daily standups. In what instances is it useful to automate? In what cases does it not make sense? Automation prevents us from having to do the same thing twice, helps us to work better together, reduces workflow errors, and frees up time to write production code. Plus, as it turns out, spending time on automation is fun! Don’t be afraid of robots in software development; embrace them! Even if I save you just half an hour a week, this talk will be a beneficial investment of your time.
Things are everywhere and connected to the Internet: this is IoT. When you are building for cloud, server, and mobile technologies, DevOps is a great way to ensure continuous, high-quality updates while keeping your developers happy and productive. But how do you do that with IoT devices?
Aligning taxonomies across your tools can greatly simplify the integration of the various stages of your deployment pipeline, and make setting up new pipelines for new projects a much simpler process.
Topics covered in this session:
- Repository layouts in Artifactory (see the example below)
- Use of standard build frameworks to enforce conventions
- Self-describing source repositories
- Taxonomy requirements of various types of tools, and how to fill in the gaps where needed
- Using Artifactory as an abstraction layer to tie your pipeline together
- Making your pipeline more opinionated
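As a taste of the first topic, Artifactory repository layouts are defined by path patterns; the built-in Maven layout, for example, looks like this:

```
[orgPath]/[module]/[baseRev](-[folderItegRev])/[module]-[baseRev](-[fileItegRev])(-[classifier]).[ext]
# e.g. it matches: org/acme/app/1.0/app-1.0.jar
```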
Installing, maintaining, and upgrading Artifactory through a CD pipeline is crucial in large HA clusters. This talk will showcase this automation using Chef recipes, highlighting the use of Chef Vault for secrets and a Chef Kitchen environment for testing.
A demo using Chef Kitchen will show publishing a Chef recipe to Artifactory and then installing Artifactory using that recipe, retrieved from Artifactory! If time permits, I will also show the recipe that installs Mission Control and the recipe that adds instances to it.
Though Artifactory is primarily used as a repository for build artifacts, its replication features let it serve as a general-purpose platform for distributing the variety of files needed across data centers and cloud regions for DevOps requirements.
The popular use of Artifactory is to integrate it with the build process so that build artifacts are available for deployment into multiple environments and for post-build uses, such as building Docker images and AMIs. However, Artifactory can manage any type of file, and that can be leveraged to build a file-sharing hub that facilitates the movement of files in a multi-data-center, multi-region cloud environment.
The talk will cover these topics, with related demos:
- How a file-sharing hub that spans multiple data centers is set up using Artifactory's simple replication feature.
- Use cases for such a platform and the kinds of files managed besides build artifacts: configuration templates, raw data files for data systems, packages for third-party applications, backup files, etc.
- Strategies to make configuration management and orchestration tools location-aware so they download files from the nearest Artifactory instance on the file-sharing hub (see the sketch after this list). Ansible will be used as the reference configuration management tool.
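As a sketch of the location-awareness strategy (the talk uses Ansible as the reference tool; this Python stand-in just shows the idea, and all hostnames and paths are invented):

```python
import requests

# Hypothetical per-site Artifactory instances in the file-sharing hub.
ARTIFACTORY_BY_REGION = {
    "us-east": "https://art-us-east.example.com/artifactory",
    "eu-west": "https://art-eu-west.example.com/artifactory",
}

def fetch(region, repo_path, dest):
    """Download a file from the Artifactory instance nearest to `region`."""
    base = ARTIFACTORY_BY_REGION[region]
    resp = requests.get(f"{base}/{repo_path}", stream=True)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=65536):
            f.write(chunk)

fetch("eu-west", "config-templates/nginx/nginx.conf.j2", "/tmp/nginx.conf.j2")
```

In Ansible, the same idea typically becomes an inventory or group variable holding the local instance's base URL, consumed by a download task.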
In this session, you will learn a variety of ways to run and use containers in your organization, for both Linux and Windows applications. Container technology allows you to achieve greater density on your hosts, reduce conflicts between dev/test/prod environments and increase deployment speed. You can easily get started using Docker containers on your workstation with Docker for Windows (or Mac), bring those containers to your datacenter or cloud provider (either on IaaS or a container service) and then deploy them at scale using Docker swarms or other orchestrators.
After this session, you will understand the different options you have for using containers on-prem or in the cloud, how containers can be deployed at scale on a cluster of host machines, how to get started with Azure Container Service, and how tools like JFrog Artifactory can help in your deployment scenarios. You will also learn the key differences between Windows Server and Hyper-V containers on Server 2016.
DevOps is usually viewed from the traditional perspective of a collaboration of Dev, Ops, and QA, driven by changes in culture, people, and process. But how do you know where you stand and where to move? As in almost any field, data and metrics give you the gauges and instruments. In this talk we’ll cover the key measurements for the DevOps transformation process and give you 3 metrics you can start measuring tomorrow.
Rapid change creates new challenges in security. Every change can introduce new vulnerabilities. Software gets built from artifacts with known problems. Frequent changes make it hard to audit what is in each build, and frequent deployments make it hard to know what is running in production.
I present a secure continuous delivery pipeline. This talk covers a set of tools to automate security and enable safe, rapid change. I will draw from my experience building and integrating security tools into a continuous delivery pipeline at PagerDuty, a software-as-a-service vendor with more than 20 production deployments per day. The presentation covers tools from multiple vendors and teaches you how to build and secure your own continuous delivery pipeline.
Building and delivering machine images to your customers can be a slow and painful process if you don't have the right tools. On top of that, you also have to worry about keeping Dev and Ops synchronized as you iterate through your machine image design! This talk will show you how a DevOps team at Teradata used Artifactory in concert with Packer, Vagrant, and Ansible within their CI/CD pipeline to speed up machine image development and deploy new releases to multiple platforms faster!
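One common glue step in such a pipeline is publishing the image Packer produces (a Vagrant box, say) to a generic Artifactory repository; a minimal sketch, with the host, repository, file names, and credentials all invented:

```python
import requests

ARTIFACTORY = "https://artifactory.example.com/artifactory"  # hypothetical host

# Deploying a file to Artifactory is a single PUT to the target path.
box = "output/teradata-dev-1.4.2.box"  # produced by `packer build`
url = f"{ARTIFACTORY}/machine-images-local/teradata-dev/1.4.2/teradata-dev.box"

with open(box, "rb") as f:
    resp = requests.put(url, data=f, auth=("ci-bot", "api-key"))
resp.raise_for_status()
```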
Implementing a CI/CD DevOps culture using Git, Jenkins, and JFrog Artifactory.
This is a story of how we worked with a startup that manually managed a continuous delivery pipeline to 4 cloud platforms. Together, we investigated suitable solutions and ended up leveraging existing Git repositories, and creating new ones, to abstract the actions relevant to continuous delivery: Create VMs > Upload installation files > Run installation > Package > Test > Publish > Destroy VMs
This solution is written in Node.js and connected to Jenkins for pipeline management.
The Tripwire build system is an event-driven, complex pipeline architecture. The pipeline configurator recognizes changes to the dependency graph - aka the pipeline - and updates the event structure by adding or removing finished-build triggers as needed. The configurator essentially follows the developers around and updates the pipeline as needed. When a developer adds a new module as a dependency, the dependency graph is updated with the new module (and all of its dependency modules), and the new build triggers are added. If a developer changes a dependency constraint from dynamic to fixed, the build trigger is no longer needed, so the configurator removes it automatically.
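The reconciliation logic can be pictured as a small diff over the dependency graph; here is a hand-wavy Python sketch (the `ci` trigger API and graph shape are hypothetical, not Tripwire's actual code):

```python
def reconcile_triggers(old_graph, new_graph, ci):
    """Add/remove finished-build triggers so CI mirrors the dependency graph.

    Graphs map module -> {dependency: "dynamic" | "fixed"}; only dynamic
    dependencies need an upstream finished-build trigger.
    """
    def triggers(graph):
        return {(dep, mod)
                for mod, deps in graph.items()
                for dep, kind in deps.items() if kind == "dynamic"}

    old, new = triggers(old_graph), triggers(new_graph)
    for dep, mod in new - old:        # newly added dynamic dependencies
        ci.add_finished_build_trigger(upstream=dep, downstream=mod)
    for dep, mod in old - new:        # removed, or pinned to a fixed version
        ci.remove_finished_build_trigger(upstream=dep, downstream=mod)
```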
The pipeline configurator is one tool in a suite of dependency management systems in use at Tripwire. The success of this dependency management system depends on a partnership between the SCM team and the development teams.
This presentation describes how the build system architecture, the dependency management system, and a unique partnership with the development teams have led to a set of 5 highly automated services provided to the business by the SCM team.
In the continuous integration cycle, developers need to quickly deploy new application builds in a dynamically changing infrastructure that provides a true replica of the production environment. Typical solutions lead to trade-offs between costly, dedicated static environments with a brittle rendering of the production workload, or a painfully slow and complex new infrastructure setup. Is DevOps yet another pipe dream?
In this presentation we will uncover a cost-effective, fast, and scalable approach to delivering and validating applications, combining the power of pipeline tools, JFrog Artifactory, Xray, and Quali’s sandbox platform to provide an end-to-end automation solution. We will also run a demo.
Debugging applications in production is like being the detective in a crime movie where you are also the murderer. Especially with microservices. Especially with containers. Especially in the cloud. Trying to see what’s going on in a production deployment at scale is impossible without proper tools! Google has spent over a decade deploying containerized Java applications at unprecedented scale and the infrastructure and tools developed by Google and JFrog have made it uniquely possible to manage, troubleshoot, and debug, at scale.
Join this session to see how you can diagnose and troubleshoot production issues with the insight provided by JFrog Artifactory and Google Cloud Platform, as well as out-of-the-box Kubernetes tools.
VMware Professional Services writes custom code for hundreds of customers every year. After automating the CI/CD pipeline all the way to final customer delivery, we discovered we still had a big, manual step at the end of the delivery process. We would manually upload builds and documentation to an FTP server, create an account for the customer, and then try to assist them through the process of retrieving their builds. This process was slow, painful, and error-prone.
Our current solution is a cloud-based Artifactory instance hosted by JFrog. Our automated process uses the Artifactory REST API to create a new repository and new user accounts for each customer. Once the repository exists, new builds can be published and retrieved by our customers, some of whom tie their own CI/CD pipelines to our output builds.
This represents cross-company automated CI/CD pipelines, which is an amazing solution that allows companies to collaborate with external contractors while still maintaining the security of their internal environments.
Perhaps the best part is that the solution isn’t particularly complex or difficult to understand. Anyone with a basic knowledge of Artifactory and REST APIs can understand the techniques presented in this talk and adapt them for their own usage quite easily.
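To give a feel for the approach, the core of it is just two REST calls; a minimal sketch, where the instance URL, repository key, and user details are invented for illustration:

```python
import requests

ARTIFACTORY = "https://mycompany.jfrog.io/mycompany"  # hypothetical cloud instance
AUTH = ("admin", "api-key")

# PUT /api/repositories/<key> creates a repository from a JSON spec.
requests.put(
    f"{ARTIFACTORY}/api/repositories/customer-acme-local",
    json={"key": "customer-acme-local", "rclass": "local",
          "packageType": "generic"},
    auth=AUTH,
).raise_for_status()

# PUT /api/security/users/<name> creates the customer's account.
requests.put(
    f"{ARTIFACTORY}/api/security/users/acme-deploy",
    json={"email": "deploy@acme.example", "password": "ChangeMe123!",
          "groups": ["readers"]},
    auth=AUTH,
).raise_for_status()
```

A permission target scoping the customer's user to just their repository completes the isolation between customers.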