Four non-traditional paths to a cloud career (and how to navigate them)

One thing I love about cloud is that it's possible to succeed as a cloud engineer from all kinds of different starting points. It's not necessarily easy; our industry remains biased toward hiring people who check a certain set of boxes, such as having a university computer science degree. But cloud in particular is new enough, and has such tremendous demand for qualified talent, that determined engineers can and do wind up in amazing cloud careers despite coming from all sorts of non-traditional backgrounds.

But still – it's scary to look at all the experienced engineers ahead of you and wonder "How will I ever get from where I am to where they are?"

A few months ago, I asked some experts at Google Cloud to help me answer common questions people ask as they consider making the career move to cloud. We recorded our answers in a video series called Cracking the Google Cloud Career that you can watch on the Google Cloud Tech YouTube channel. We tackled questions like…

How do I go from a traditional IT background to a cloud job?

You have a superpower if you want to move from an old-school IT job to the cloud: you already work in tech! That may give you access to colleagues and situations that can level up your cloud skills and network right in your current position. But even if that's not happening, you don't have to go back and start from square one. Your existing career will give you a solid foundation of professional experience that you can layer cloud skills on top of. Check out my video to see what skills I recommend polishing up before you make the jump to cloud interviews.

How do I move from a help desk job to a cloud job?

The help desk is the classic entry-level tech position, but moving up sometimes seems like an insurmountable challenge. Rishab Kumar graduated from a help desk role to a Technical Solutions Specialist position at Google Cloud. In his video, he shares his story and outlines some takeaways to help you plot your own path forward. Notably, Rishab calls out the importance of building a portfolio of cloud projects: cloud certifications helped him learn, but in the job interview he got more questions about the side projects he had implemented. Watch his full breakdown here.

How do I switch from a non-technical career to the cloud?

There's no law that says you have to start your tech career in your early twenties and do nothing else for the rest of your career. In fact, many of the strongest technologists I know came from previous backgrounds as disparate as plumbing, professional poker, and pest control. That's no accident: those fields hone operational and people skills that are just as valuable in cloud as anywhere else. But you'll still need a growth mindset and lots of learning to land a cloud job without traditional credentials or previous experience in the space. Google Cloud's Stephanie Wong came to tech from the pageant world and has some great advice about how to build a professional network that will help you make the switch to a cloud job.
In particular, she recommends joining the no-cost Google Cloud Innovators program, which gives you inside access to the latest updates on Google Cloud services alongside a community of fellow technologists from around the globe. Stephanie also points out that you don't have to be a software engineer to work in the cloud; there are many other roles, like developer relations, sales engineering, and solutions architecture, that stay technical and hands-on without building software every day. You can check out her full suggestions for transitioning to a tech career in this video.

How do I get a job in the cloud without a computer-related college degree?

No matter your age or technical skill level, it can be frustrating and intimidating to see role after role that requires a bachelor's degree in a field such as IT or computer science. I'm going to let you in on a little secret: once you get that first job and add some experience to your skills, hardly anybody cares about your educational background anymore. But some recruiters and hiring managers still use degrees as a shortcut when evaluating people for entry-level jobs.

Without a degree, you'll have to get a bit creative in assembling credentials. First, consider getting certified. Cloud certifications like the Google Cloud Associate Cloud Engineer can help you bypass degree filters and get you an interview. Not to mention, they're a great way to get familiar with the workings of your cloud. Google Cloud's Priyanka Vergadia suggests working toward skill badges on Google Cloud Skills Boost; each skill badge represents a curated grouping of hands-on labs within a particular technology that can help you build momentum and confidence toward certification.

Second, make sure you are bringing hands-on skills to the interview. College students do all sorts of projects to bolster their education. You can do this too – but at a fraction of the cost of a traditional degree. As Priyanka points out in this video, make sure you are up to speed on Linux, networking, and programming essentials before you apply.

No matter your background, I'm confident you can have a fulfilling and rewarding career in cloud as long as you get serious about these two things:

Own your credibility through certification and hands-on practice, and
Build strong connections with other members of the global cloud community.

In the meantime, you can watch the full Cracking the Google Cloud Career playlist on the Google Cloud Tech YouTube channel. And feel free to start your networking journey by reaching out to me anytime on Twitter if you have cloud career questions – I'm happy to help however I can.
Source: Google Cloud Platform

Pro tools for Pros: Industry-leading observability capabilities for Dataflow

Dataflow is the industry-leading unified platform offering batch and stream processing. It is a fully managed service with flexible development options (from Flex Templates and Notebooks to the Apache Beam SDKs for Java, Python, and Go) and a rich set of built-in management tools. It also offers seamless integrations with Google Cloud products such as Pub/Sub, BigQuery, Vertex AI, Cloud Storage, Spanner, and Bigtable, as well as third-party services and products such as Kafka and AWS S3, to best meet your data movement use cases.

While our customers value these capabilities, they continue to push us to innovate and provide more value as the best batch and streaming data processing service for their ever-changing business needs. Observability is a key area where the Dataflow team continues to invest based on customer feedback. Adequate visibility into the state and performance of Dataflow jobs is essential for business-critical production pipelines. In this post, we will review Dataflow's key observability capabilities:

Job visualizers – job graphs and execution details
New metrics and logs
New troubleshooting tools – error reporting, profiling, insights
New Datadog dashboards and monitors

Dataflow observability at a glance

There is no need to configure or manually set up anything; Dataflow offers observability out of the box within the Google Cloud Console, from the time you deploy your job. Observability capabilities are seamlessly integrated with Google Cloud Monitoring and Logging along with other GCP products. This integration gives you a one-stop shop for observability across multiple GCP products, which you can use to meet your technical challenges and business goals.

Understanding your job's execution: job visualizers

Questions: What does my pipeline look like? What's happening in each step? Where is the time spent?

Solution: Dataflow's Job graph and Execution details tabs answer these questions and help you understand the performance of the various stages and steps within the job.

Job graph illustrates the steps involved in the execution of your job, in the default Graph view. The graph gives you a view of how Dataflow has optimized your pipeline's code for execution, after fusing (optimizing) steps into stages. The Table view tells you more about each step, the associated fused stages, the time spent in each step, and their statuses as the pipeline continues execution. Each step in the graph displays more information, such as the input and output collections and output data freshness; these help you analyze the amount of work done at a step (elements processed) and its throughput.

Fig 1. Job graph tab showing the DAG for a job and the key metrics for each stage on the right.

Execution details has all the information to help you understand and debug the progress of each stage within your job. For streaming jobs, you can view the data freshness of each stage. The Data freshness by stages chart includes anomaly detection: it highlights "potential slowness" and "potential stuckness" to help you narrow down your investigation to a particular stage. Learn more about using the Execution details tab for batch and streaming here.

Fig 2. The Execution details tab showing data freshness by stage over time, with anomaly warnings in data freshness.

Monitor your job with metrics and logs

Questions: What's the state and performance of my jobs? Are they healthy? Are there any errors?

Solution: Dataflow offers several metrics to help you monitor your jobs. A full list of Dataflow job metrics can be found in our metrics reference documentation. In addition to the Dataflow service metrics, you can view worker metrics, such as CPU utilization and memory usage. Lastly, you can generate Apache Beam custom metrics from your code.
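As an illustration (the DoFn, namespace, and metric names below are hypothetical, not taken from this post), a Beam custom metric in the Python SDK can be as simple as declaring a counter or distribution in a DoFn and updating it as elements are processed:

```python
import json

import apache_beam as beam
from apache_beam.metrics import Metrics


class ParseEvents(beam.DoFn):
    """Parses JSON events and records custom metrics about the work done."""

    def __init__(self):
        # Hypothetical custom metrics: a counter for unparseable records and
        # a distribution of payload sizes. The namespace/name pair is what
        # appears in the monitoring UI.
        self.parse_errors = Metrics.counter('example_pipeline', 'parse_errors')
        self.payload_bytes = Metrics.distribution('example_pipeline', 'payload_bytes')

    def process(self, element):
        self.payload_bytes.update(len(element))
        try:
            yield json.loads(element)
        except ValueError:
            self.parse_errors.inc()
```

When such a pipeline runs on Dataflow, user-defined counters and distributions surface alongside the service metrics in the job's monitoring interface, so you can review them next to the built-in data.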
Job metrics is the one-stop shop for the most important metrics when reviewing the performance of a job or troubleshooting it. Alternatively, you can access this data from the Metrics Explorer to build your own Cloud Monitoring dashboards and alerts.

Job and worker logs are one of the first things to look at when you deploy a pipeline. You can access both log types in the Logs panel on the Job details page. Job logs include information about startup tasks, fusion operations, autoscaling events, worker allocation, and more. Worker logs include information about the work processed by each worker within each step in your pipeline. You can configure the logging level and route the logs using the guidance provided in our pipeline log documentation. Logs are seamlessly integrated into Cloud Logging: you can write Cloud Logging queries, create log-based metrics, and create alerts on those metrics.

New: Metrics for streaming jobs

Questions: Is my pipeline slowing down or getting stuck? I want to understand how my code is impacting the job's performance. I want to see how my sources and sinks are performing with respect to my job.

Solution: We have introduced several new metrics for Streaming Engine jobs that help answer these questions, and all of them are instantly accessible from the Job metrics tab.

The engineering teams at the Renault Group have been using Dataflow for their streaming pipelines as a core part of their digital transformation journey. "Deeper observability of our data pipelines is critical to track our application SLOs," said Elvio Borrelli, Tech Lead – Big Data at the Renault Digital Transformation & Data team. "The new metrics, such as backlog seconds and data freshness by stage, now provide much better visibility about our end-to-end pipeline latencies and areas of bottlenecks. We can now focus more on tuning our pipeline code and data sources for the necessary throughput and lower latency."

To learn more about using these metrics in the Cloud console, please see the Dataflow monitoring interface documentation.

Fig 3. The Job metrics tab showing the autoscaling chart and the various metrics categories for streaming jobs.

To learn how to use these metrics to troubleshoot common symptoms within your jobs, watch the webinar Dataflow Observability, Monitoring, and Troubleshooting.

Debug job health using Cloud Error Reporting

Problem: There are a couple of errors in my Dataflow job. Is it my code, data, or something else? How frequently are they happening?

Solution: Dataflow offers native integration with Google Cloud Error Reporting to help you identify and manage errors that impact your job's performance. In the Logs panel on the Job details page, the Diagnostics tab tracks the most frequently occurring errors. It is integrated with Google Cloud Error Reporting, enabling you to manage errors by creating bugs or work items or by setting up notifications. For certain types of Dataflow errors, Error Reporting also provides a link to troubleshooting guides and solutions.

Fig 4. The Diagnostics tab in the Logs panel displaying top errors and their frequency.
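Because the job and worker logs behind these views live in Cloud Logging, you can also query them outside the console. The sketch below uses the google-cloud-logging client library; the project ID, job name, and timestamp are placeholders, and the filter simply narrows to error-level entries from the dataflow_step monitored resource:

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder project ID

# Illustrative filter: error-level logs for a single Dataflow job.
log_filter = (
    'resource.type="dataflow_step" '
    'resource.labels.job_name="my-streaming-job" '  # placeholder job name
    'severity>=ERROR '
    'timestamp>="2022-10-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=log_filter, order_by=cloud_logging.DESCENDING):
    print(entry.timestamp, entry.severity, entry.payload)
```

The same filter expression works in the Logs Explorer, and it is the kind of query you would attach to a log-based metric or an alerting policy.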
New: Troubleshoot performance bottlenecks using Cloud Profiler

Problem: What part of my code is taking the most time to process the data? Which operations are consuming more CPU cycles or memory?

Solution: Dataflow offers native integration with Google Cloud Profiler, which lets you profile your jobs to understand performance bottlenecks, with support for CPU, memory, and I/O operation profiling. Is my pipeline's latency high? Is it CPU intensive, or is it spending time waiting for I/O operations? Or is it memory intensive? If so, which operations are driving this up? The flame graph helps you find answers to these questions. You can enable profiling for your Dataflow jobs by specifying a flag during job creation or while updating your job. To learn more, see the Monitor pipeline performance documentation.

Fig 5. The CPU time profiler showing the flame graph for a Dataflow job.
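As a rough sketch of what enabling that flag can look like from the Python SDK, the snippet below passes a Dataflow service option through the pipeline options. The option name reflects the Cloud Profiler integration as documented at the time of writing, but treat it as an assumption and confirm it against the Monitor pipeline performance page; the project, region, and bucket values are placeholders:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-project",                 # placeholder
    "--region=us-central1",                 # placeholder
    "--temp_location=gs://my-bucket/tmp",   # placeholder
    # Assumed service option that turns on Cloud Profiler for the job;
    # verify the exact name in the Dataflow documentation.
    "--dataflow_service_options=enable_google_cloud_profiler",
])

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | "Create" >> beam.Create(["hello", "dataflow"])
     | "Print" >> beam.Map(print))
```

Once the job is running, the collected profiles appear in the Cloud Profiler interface, where the flame graph makes the most expensive transforms easy to spot.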
New: Optimize your jobs using Dataflow insights

Problem: What can Dataflow tell me about improving my job's performance or reducing its costs?

Solution: You can review Dataflow insights to improve performance or to reduce costs. Insights are enabled by default on your batch and streaming jobs; they are generated by automatically analyzing your jobs' executions. Dataflow insights is powered by Google Active Assist's Recommender service, is automatically enabled for all jobs, and is available free of charge. Insights include recommendations such as enabling autoscaling, increasing maximum workers, and increasing parallelism. Learn more in the Dataflow Insights documentation.

Fig 6. Dataflow insights show up on the Jobs overview page next to the active jobs.

New: Datadog dashboards and recommended monitors

Problem: I would like to monitor Dataflow in my existing monitoring tools, such as Datadog.

Solution: Dataflow's metrics and logs are accessible in the observability tools of your choice via the Google Cloud Monitoring and Logging APIs. Customers using Datadog can now leverage the out-of-the-box Dataflow dashboards and recommended monitors to monitor their Dataflow jobs alongside other applications within the Datadog console. Learn more in Datadog's blog post on how to monitor your Dataflow pipelines with Datadog.

Fig 7. Datadog dashboard monitoring Dataflow jobs across projects.

ZoomInfo, a global leader in modern go-to-market software, data, and intelligence, is partnering with Google Cloud to enable customers to easily integrate their business-to-business data into Google BigQuery. Dataflow is a critical piece of this data movement journey. "We manage several hundreds of concurrent Dataflow jobs," said Hasmik Sarkezians, ZoomInfo Engineering Fellow. "Datadog's dashboards and monitors allow us to easily monitor all the jobs at scale in one place. And when we need to dig deeper into a particular job, we leverage the detailed troubleshooting tools in Dataflow such as Execution details, worker logs and job metrics to investigate and resolve the issues."

What's next

Dataflow is leading the batch and streaming data processing industry with best-in-class observability experiences, but we are just getting started. Over the next several months, we plan to introduce more capabilities, such as:

Memory observability to detect and prevent potential out-of-memory errors.
Metrics for sources and sinks, end-to-end latency, bytes processed by a PTransform, and more.
More insights – quota, memory usage, worker configurations and sizes.
Pipeline validation before job submission.
Debugging user-code and data issues using data sampling.
Autoscaling observability improvements.
Project-level monitoring, sample dashboards, and recommended alerts.

Got feedback or ideas? Shoot them over, or take this short survey.

Getting started

To get started with Dataflow, see the Cloud Dataflow quickstarts. To learn more about Dataflow observability, review these articles:

Using the Dataflow monitoring interface
Building production-ready data pipelines using Dataflow: Monitoring data pipelines
Beam College: Dataflow Monitoring
Beam College: Dataflow Logging
Beam College: Troubleshooting and debugging Apache Beam and GCP Dataflow
Source: Google Cloud Platform

In Case You Missed It: Docker Community All-Hands

That’s a wrap! Community All-Hands has officially come to a close. Our sixth All-Hands featured over 35 talks across 10 channels — with topics ranging from “getting started with Docker” to running machine learning on AI hardware accelerators.

As always, every channel was buzzing with activity. Your willingness to jump in, ask questions, and help others is what the Docker community’s all about. And we loved having the chance to chat with everyone directly! 

Couldn’t attend our recent Community All-Hands event? We’ll cover some important announcements, interesting presentations, and more that you missed.

Docker CPO looks back at a year of developer obsession

Headlining Community All-Hands were some important announcements on the main stage, kicked off by Jake Levirne, our Head of Products. This past year, our engineers focused on improving developer experiences across every product. Integrated features like Dev Environments, Docker Extensions, SBOM, and Compose V2 have helped streamline workflows — along with numerous usability and OS-specific improvements. 

Over the last year, the Docker engineering team:

Released 24 new features
Made 37,000 internal commits
Curated 52 extensions and counting within Docker Desktop and Docker Hub
Hosted over eight million Docker Desktop downloads

We couldn’t have made these improvements without your feedback. Keep your votes, comments, and messages coming — they’re essential for helping us ship the features you need. Keep an eye out for continued updates about UX enhancements, Trusted Open Source, and user-centric partnerships.

How to use SBOMs to support multiple layers

Following Jake, our presenters dove deeper into the technical depths. Next up was a session on viewing images through layered software bills of materials (SBOMs), led by Docker Principal Software Engineer Jim Clark. 

SBOMs are extremely helpful for knowing what’s in your images and apps. But where it gets complex is that many images stem from base images. And even those base images can have their own base images, making full image transparency difficult. Multi-layer images have historically been harder to analyze. To get a full picture of a multi-layer image, you’ll need to know things like:

Which packages are included
How those packages are distributed between layers
How image rebuilds can impact packages
If security fixes are available for individual packages

Jim shared that it’s now possible to gather this information. While this feature is still under development, users will soon be able to see layer sizes and total packages per layer, and view complete Dockerfiles on GitHub.

And as a next step, the team is also focused on understanding shared content and tracking public data. This is another step toward building developer trust, and knowing exactly what’s going into your projects.

Docker Desktop meets multi-platform image support via containerd

Rounding out our major announcements was Djordje Lukic, Staff Software Engineer, with a session on containerd image management. Containerd has been our container runtime since 2016. Since then, we’ve extended its integration within Docker Desktop and Docker Engine.

Containerd migration offers some key benefits: 

There’s less code to maintain
We can ship features more rapidly and shorten release cycles
It’s easier to improve our developer tooling
We can bring multi-platform support to Docker, while following the Open Container Initiative (OCI) more closely and supporting different snapshotters

Leveraging containerd more heavily means we can consolidate portions of the Docker Daemon. Check out our containerd announcement blog to learn more. 

Showcasing attendees’ favorite talks

Every Community All-Hands channel hosted unique sets of topics, while each session highlighted relationships between Docker and today’s top technologies. Here are some popular talks from Community All-Hands and why they’re worth watching. 

Developing Go Apps with Docker

From the “Best Practices” channel.

Go (or Golang) is a language well-loved and highly sought after by professional developers. We support it as a core language and maintain a Go language-specific use guide within our docs. 

Follow along with Muhammad Quanit as he explores containerized Go applications. Muhammad covers best practices, the importance of multi-stage builds, and other tips for optimizing your Dockerfiles. By using a Go web server, he demonstrates the “Dockerization” process and the usefulness of IDE extensions.

Integration Testing Your Legacy Java Microservice with docker-maven-plugin

From the “Demos” channel.

Enterprises and development teams often maintain Java code bases upwards of 10 years old. While these services may still be functional, it’s been challenging to bind automated testing to each individual microservice repository. Docker Compose does enable batch testing, but extra granularity is often needed.

Join Terry Brady as he shows you how to run JUnit microservices tests, automated maven testing, code coverage calculation, and even test-resource management. Don’t worry about rewriting your legacy code. Instead, learn how integration testing and dedicated test containers help make life easier. 

How Does Docker Run Machine Learning on Specialized AI Hardware Accelerators

From the “Cutting Edge” channel.

Currently, 35% of companies report using AI in some fashion, while another 42% of respondents say they’re considering it. Machine learning (ML) — a subset of AI — has been critical to creating predictive models, extracting value from big data, and automating many tedious processes. 

Shashank Prasanna outlines just how important specialized hardware is to powering these algorithms. And while ML gains steam, companies are unveiling numerous supporting chipsets and GPUs. How does Docker handle these accelerators? Follow along as Shashank highlights Docker’s capabilities within multi-processor systems, and how these differ from traditional, single-CPU systems from an AI standpoint.

But wait, there’s more! 

The above talks are just a small sample of our learning sessions. Swing by our Docker YouTube channel to browse through our entire content library. 

You can also check out playlists from each event channel: 

Mainstage – showcases of the community and Docker’s latest developments
Best Practices – tips to get the most from your Docker applications
Demos – in-depth presentations that tackle unique use cases, step by step
Security – best practices for building stronger, attack-resistant containers and applications
Extensions – the basics of building extensions while demonstrating their usefulness in different scenarios
Cutting Edge – talks about how Docker and today’s leading technologies unite
International Waters – multilingual tech talks and panel discussions on trends
Open Source – panels on the Docker-Sponsored Open Source Program and the value of open source
Unconference – informal talks on getting started with Docker and Docker experiences

Thank you and see you next time!

From key Docker announcements, to technical talks, to our buzzworthy Community Awards ceremony, we had an absolute blast with you at Community All-Hands. Also, a huge special thanks to DJ Alessandro Vozza for keeping the music and excitement going!

And don’t forget to download the latest Docker Desktop to check out the releases and try out any new tricks you’ve learned.

See you at our next All-Hands event, and thank you for making this community stronger. Happy developing!

Learn about our recent releases

Extending Docker’s Integration with containerd
The Docker-Sponsored Open Source Program has a new look!
Integrated Terminal for Running Containers, Extended Integration with Containerd, and More in Docker Desktop 4.12
Source: https://blog.docker.com/feed/

Introducing AWS Application Discovery Service Agentless Collector – a new discovery tool for AWS Application Discovery Service

AWS Application Discovery Service now includes the Application Discovery Service Agentless Collector to help enterprise customers collect information for their migration projects. It is deployed as a virtual appliance installed in the user's data centers, so a single installation can monitor hundreds of servers.
Source: aws.amazon.com

Amazon ElastiCache for Memcached is now HIPAA compliant

Amazon ElastiCache for Memcached is now HIPAA (Health Insurance Portability and Accountability Act) compliant. You can now use ElastiCache for Memcached to store, process, and access PHI (protected health information) and to power secure healthcare and life sciences applications. ElastiCache for Memcached is a fully managed, Memcached-compatible in-memory key-value store service that supports real-time applications with sub-millisecond latency.
Source: aws.amazon.com

AWS Security Hub introduces an "Announcements" topic

AWS Security Hub now publishes announcements via Amazon Simple Notification Service (SNS) so that you can stay up to date on the latest features and announcements. To receive announcements about new AWS Security Hub features, subscribe to the AWS Security Hub SNS topic in your preferred region.
Source: aws.amazon.com