Amazon Aurora Serverless v1 supports in-place upgrade from MySQL 5.6 to 5.7

Amazon Aurora Serverless v1 now supports in-place upgrades from MySQL 5.6 to 5.7. Instead of backing up the database and restoring it to the new version, you can perform the upgrade with just a few clicks in the Amazon RDS Management Console, or with the latest AWS SDK or CLI. No new cluster is created, so you keep the same endpoints and other characteristics of the cluster. The upgrade completes in minutes, since no data has to be copied to a new cluster volume. The upgrade can be applied immediately or during the maintenance window. Your database cluster is unavailable during the upgrade. For more information, see the Aurora documentation.
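For example, here is a minimal sketch of requesting the upgrade with boto3, the AWS SDK for Python; the cluster identifier and the exact target engine version string are illustrative assumptions, so check the Aurora documentation for the version labels available to your cluster:

    import boto3

    rds = boto3.client("rds")

    # Request the in-place engine upgrade for an Aurora Serverless v1 cluster.
    # The cluster identifier and target version string below are placeholders.
    rds.modify_db_cluster(
        DBClusterIdentifier="my-serverless-cluster",
        EngineVersion="5.7.mysql_aurora.2.08.3",  # a MySQL 5.7-compatible Aurora version
        ApplyImmediately=True,  # set False to upgrade during the maintenance window
    )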
Source: aws.amazon.com

Building a Mobility Dashboard with Cloud Run and Firestore

Visualization is the key to understanding massive amounts of data. Today we have BigQuery and Looker to analyze petabyte-scale data and extract insights in a sophisticated way. But what about monitoring data that actively changes every second? In this post, we will walk through how to build a real-time dashboard with Cloud Run and Firestore.

Mobility Dashboard

There are many business use cases that require real-time updates, for example inventory monitoring in retail stores, security cameras, and MaaS (Mobility as a Service) applications such as ride sharing. In the MaaS business area, the locations of vehicles are very useful in making business decisions. In this post, we are going to build a mobility dashboard that monitors vehicles on a map in real time.

The Architecture

The dashboard should be accessible from the web browser without any setup on the client side. Cloud Run is a good fit because it generates a URL for the service and, of course, scales to handle millions of users. We also need an app that can plot geospatial data, and a database that can broadcast its updates. Here are my choices and architecture:

- Cloud Run: hosts the web app (dashboard)
  - streamlit: a library to visualize data and build web apps
  - pydeck: a library to plot geospatial data
- Firestore: a fully managed database that keeps your data in sync

The diagram below illustrates a brief architecture of the system. In a production environment, you may also need to implement a data ingestion and transformation pipeline. Before going to the final form, let's take some steps to understand each component.

Step 1: Build a data visualization web app with Cloud Run + streamlit

streamlit is an OSS web app framework that can create beautiful data visualization apps without front-end knowledge (e.g. HTML, JS). If you are familiar with the pandas DataFrame for your data analytics, it won't take much time to implement. For example, you can visualize a DataFrame in a few lines of code.

    import streamlit as st
    import pandas as pd
    import numpy as np

    chart_data = pd.DataFrame(
        np.random.randn(20, 3),
        columns=['a', 'b', 'c'])
    st.line_chart(chart_data)

The chart on the web app (Source)

Making this app runnable on Cloud Run is easy. Just add streamlit to requirements.txt, and create a Dockerfile from a typical Python web app image. If you are not familiar with Docker, buildpacks can do the job. Instead of writing a Dockerfile, create a Procfile with just one line, as below.

    web: streamlit run app.py --server.port $PORT --server.enableCORS=false

To summarize, the minimum required files are only these:

    .
    |-- app.py
    |-- Procfile
    |-- requirements.txt

Deployment is also easy. You can deploy this app to Cloud Run with a single command.

    $ gcloud run deploy mydashboard --source .

This command builds your image with buildpacks and Cloud Build, so you don't need to set up a build environment on your local system. Once deployment is completed, you can access your web app at the generated URL like https://xxx-[…].run.app.
Copy and paste the URL into your web browser, and you will see your first dashboard web app.

Step 2: Add a callback function that receives changes from the Firestore database

In Step 1, you can visualize your data with fixed conditions or interactively with UI functions in streamlit. Now we want it to update by itself.

Firestore is a scalable NoSQL database, and it keeps your data in sync across client apps through real-time listeners. Firestore is available on Android and iOS, and also provides SDKs for major programming languages. Since we use streamlit in Python, let us use the Python client.

Although this post doesn't cover the detailed usage of Firestore, it is easy to implement a callback function that is called when a specific "Collection" has been changed. [reference]

    from google.cloud import firestore_v1

    db = firestore_v1.Client()
    collection_ref = db.collection('users')

    def on_snapshot(collection_snapshot, changes, read_time):
        for doc in collection_snapshot.documents:
            print('{} => {}'.format(doc.id, doc.to_dict()))

    # Watch this collection
    collection_watch = collection_ref.on_snapshot(on_snapshot)

In this code, the on_snapshot callback function is called when the users Collection has been changed. You can also watch changes on a Document.

Since Firestore is a fully managed database, you do not need to provision the service ahead of time. You only need to choose a "mode" and a location. To use the real-time sync functionality, select "Native mode". Also select the nearest or desired location.

Using Firestore with streamlit

Now let's use Firestore with streamlit. We add the on_snapshot callback and update a chart with the latest data sent from Firestore. Here is one quick note when you use the callback function with streamlit: the on_snapshot function is executed in a sub thread, whereas UI manipulation in streamlit must be executed in the main thread. Therefore, we use a Queue to pass the data between threads. The code will be something like below.

    from queue import Queue

    q = Queue()

    def on_snapshot(collection_snapshot, changes, read_time):
        for doc in collection_snapshot.documents:
            q.put(doc.to_dict())  # Put data into the Queue

    # below will run in the main thread
    snap = st.empty()  # placeholder

    while True:
        # q.get() is a blocking call, so it is recommended to add a timeout
        doc = q.get()  # Read from the Queue
        snap.write(doc)  # Change the UI

Deploy this app and write something into the collection you refer to. You will see the updated data on your web app.

Step 3: Plot geospatial data with streamlit

We learned how to host web apps on Cloud Run, then how to update data with Firestore. Now we want to know how to plot geospatial data with streamlit. streamlit has multiple ways to plot geospatial data that includes latitude and longitude; here we used st.pydeck_chart(). This function is a wrapper around deck.gl, a geospatial visualization library. For example, provide data with latitude and longitude to plot, and add layers to visualize it.

    import streamlit as st
    import pydeck as pdk
    import pandas as pd
    import numpy as np

    df = pd.DataFrame(
        np.random.randn(1000, 2) / [50, 50] + [37.76, -122.4],
        columns=['lat', 'lon'])
    st.pydeck_chart(pdk.Deck(
        map_provider="carto",
        map_style='road',
        initial_view_state=pdk.ViewState(
            latitude=37.76,
            longitude=-122.4,
            zoom=11,
            pitch=50,
        ),
        layers=[
            pdk.Layer(
                'HexagonLayer',
                data=df,
                get_position='[lon, lat]',
                radius=200,
                elevation_scale=4,
                elevation_range=[0, 1000],
                pickable=True,
                extruded=True,
            ),
            pdk.Layer(
                'ScatterplotLayer',
                data=df,
                get_position='[lon, lat]',
                get_color='[200, 30, 0, 160]',
                get_radius=200,
            ),
        ],
    ))

Plotting with st.pydeck_chart (Source)

pydeck supports multiple map platforms; we chose CARTO here. If you would like to see more great examples using CARTO and deck.gl, please refer to this blog.

Step 4: Plot mobility data

We are very close to the goal. Now we want to plot the locations of vehicles. pydeck supports several ways to plot data, and TripsLayer is a good fit for mobility data.

Demo using Google Maps JavaScript API (Source)

TripsLayer visualizes location data in time sequence. That means that when you select a specific timestamp, it plots lines from the location data at that time, including the last n periods. It also draws an animation when you advance the time in sequential order. In the final form, we also add an IconLayer to mark the latest location. This layer is also useful when you want to plot a static location; it works just like a "pin" on Google Maps.
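The post does not include the TripsLayer code itself, so here is a minimal sketch of combining a TripsLayer and an IconLayer in pydeck; the data shapes, column names, and icon URL are illustrative assumptions rather than the post's actual implementation:

    import pandas as pd
    import pydeck as pdk
    import streamlit as st

    # Hypothetical data: one row per vehicle, holding a short location history.
    # "path" is a list of [lon, lat] points; "timestamps" holds the matching times.
    trips = pd.DataFrame({
        "path": [[[-74.00, 40.72], [-73.99, 40.73], [-73.98, 40.74]]],
        "timestamps": [[0, 60, 120]],
    })
    latest = pd.DataFrame({"lon": [-73.98], "lat": [40.74]})
    # IconLayer expects an icon spec per row; the URL is a placeholder.
    latest["icon_data"] = [{
        "url": "https://example.com/pin.png",
        "width": 128,
        "height": 128,
        "anchorY": 128,
    }]

    st.pydeck_chart(pdk.Deck(
        map_provider="carto",
        map_style="road",
        initial_view_state=pdk.ViewState(
            latitude=40.73, longitude=-73.99, zoom=12),
        layers=[
            pdk.Layer(
                "TripsLayer",
                data=trips,
                get_path="path",
                get_timestamps="timestamps",
                get_color=[253, 128, 93],
                width_min_pixels=5,
                trail_length=600,  # keep the trail for the last 600 time units
                current_time=120,  # the selected timestamp
            ),
            pdk.Layer(
                "IconLayer",
                data=latest,
                get_icon="icon_data",
                get_position="[lon, lat]",
                get_size=4,
                size_scale=15,
            ),
        ],
    ))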
Now we need to think about how to use this plot with Firestore. Let's create a Document per vehicle, and only save the latest latitude, longitude, and timestamp of every vehicle. Why not save the history of locations? For that, we should rather use BigQuery; here we just want to see the latest locations, updated in real time. Firestore is useful and scalable, yet NoSQL; note that there are some good fits and bad fits for NoSQL.

Location data in the Firestore Console

Step 5: Run

Finally, we are here. Now let's ride in a car and record data… if possible. For demo purposes, we instead ingest dummy data into Firestore. It is easy to write data by using a client library.

    from google.cloud import firestore

    db = firestore.Client()
    col_ref = db.collection('connected')
    # vehicle_ind is the index of the vehicle being simulated
    col_ref.document(str(vehicle_ind)).set({
        'lonlat': [-74, 40.72],
        'timestamp': 0
    })

While writing the dummy data, open the web page hosted on Cloud Run. You will see the map update as new data comes in.

Firestore syncs data on streamlit

Note that we used dummy data and manipulated the timestamps; consequently, the location data updates much faster than in actual time. This can be fixed once you use proper data and a proper update cycle.

Try it with your data

In this post, we learned how to build a dashboard that updates in real time with Cloud Run and Firestore. Let us know when you find other use cases for these Google Cloud products. Find out more automotive solutions here. Haven't used Google Cloud yet? Try it from here. Check out the source code on GitHub.
Source: Google Cloud Platform

Announcing new BigQuery capabilities to help secure sensitive data

In order to better serve their customers and users, digital applications and platforms continue to store and use sensitive data such as Personally Identifiable Information (PII), genetic and biometric information, and credit card information. Many organizations that provide data for analytics use cases face evolving regulatory and privacy mandates, ongoing risks from data breaches and data leakage, and a growing need to control data access.

Data access control and masking of sensitive information is even more complex for large enterprises that are building massive data ecosystems. Copies of datasets are often created to manage access for different groups. Sometimes copies of data are obfuscated while other copies aren't. This creates an inconsistent approach to protecting data, which can be expensive to manage. To fully address these concerns, sensitive data needs to be protected with the right defense mechanism at the base table itself, so that data can be kept secure throughout its entire lifecycle.

Today, we're excited to introduce two new capabilities in BigQuery that add a second layer of defense on top of access controls to help secure and manage sensitive data.

1. General availability of BigQuery column-level encryption functions

BigQuery column-level encryption SQL functions enable you to encrypt and decrypt data at the column level in BigQuery. These functions unlock use cases where data is natively encrypted in BigQuery and must be decrypted when accessed. They also support use cases where data is externally encrypted, stored in BigQuery, and must then be decrypted when accessed. The SQL functions support the industry-standard encryption algorithms AES-GCM (non-deterministic) and AES-SIV (deterministic). Functions supporting AES-SIV allow for grouping, aggregation, and joins on encrypted data.

In addition to these SQL functions, we also integrated BigQuery with Cloud Key Management Service (Cloud KMS). This gives you additional control, allowing you to manage your encryption keys in KMS, and enables on-access secure key retrieval as well as detailed logging. An additional layer of envelope encryption enables the generation of wrapped keysets for decrypting data. Only users with permission to access the Cloud KMS key and the wrapped keyset can unwrap the keyset and decrypt the ciphertext.

"Enabling dynamic field-level encryption is paramount for our data fabric platform to manage highly secure, regulated assets with rigorous security policies complying with several regulations including FedRAMP, PCI, GDPR, CCPA and more. BigQuery column-level encryption capability provides us with a secure path for decrypting externally encrypted data in BigQuery, unblocking analytical use cases across more than 800 analysts," said Kumar Menon, CTO of Equifax.

Users can also leverage the available SQL functions for both non-deterministic and deterministic encryption, the latter enabling joins and grouping on encrypted data columns.

The following query sample uses non-deterministic SQL functions to decrypt ciphertext.

    SELECT
      AEAD.DECRYPT_STRING(KEYS.KEYSET_CHAIN(
          @kms_resource_name,
          @wrapped_keyset),
        ciphertext,
        additional_data)
    FROM
      ciphertext_table
    WHERE
      ...

The following query sample uses deterministic SQL functions to decrypt ciphertext.

    SELECT
      DETERMINISTIC_DECRYPT_STRING(KEYS.KEYSET_CHAIN(
          @kms_resource_name,
          @wrapped_keyset),
        ciphertext,
        additional_data)
    FROM
      ciphertext_table
    WHERE
      ...
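For illustration, the non-deterministic sample above could be run from the BigQuery Python client library using query parameters; the dataset, table, KMS resource path, and keyset bytes here are placeholder assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()

    sql = """
    SELECT
      AEAD.DECRYPT_STRING(
        KEYS.KEYSET_CHAIN(@kms_resource_name, @wrapped_keyset),
        ciphertext,
        additional_data) AS plaintext
    FROM mydataset.ciphertext_table
    """

    # Placeholder key material; in practice this is your keyset wrapped
    # (envelope-encrypted) with the Cloud KMS key named below.
    wrapped_keyset = b"..."

    job_config = bigquery.QueryJobConfig(query_parameters=[
        bigquery.ScalarQueryParameter(
            "kms_resource_name", "STRING",
            "gcp-kms://projects/my-project/locations/us/keyRings/kr/cryptoKeys/ck"),
        bigquery.ScalarQueryParameter("wrapped_keyset", "BYTES", wrapped_keyset),
    ])

    for row in client.query(sql, job_config=job_config).result():
        print(row.plaintext)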
2. Preview of dynamic data masking in BigQuery

Extending BigQuery's column-level security, dynamic data masking allows you to obfuscate sensitive data and control user access while mitigating the risk of data leakage. This capability selectively masks column-level data at query time based on the defined masking rules, user roles, and privileges. Masking eliminates the need to duplicate data and allows you to define different masking rules on a single copy of data to desensitize data, simplify user access to sensitive data, and protect against compliance, privacy regulation, or confidentiality issues.

Dynamic data masking allows for different transformations of the underlying sensitive data to obfuscate it at query time. Masking rules can be defined on the policy tag in the taxonomy to grant varying levels of access based on the role and function of the user and the type of sensitive data. Masking adds to the existing access controls to give customers a wide gamut of options for controlling access. An administrator can grant a user full access, no access, or partial access with a particular masked value, based on the data sharing use case.

For the preview of data masking, three masking policies are supported:

- ALWAYS_NULL. Nullifies the content regardless of column data type.
- SHA256. Applies SHA256 to STRING or BYTES data types. Note that the same restrictions apply as for the SHA256 function.
- Default_VALUE. Returns the default value based on the data type.

A user must first have all of the permissions necessary to run a query job against a BigQuery table in order to query it. In addition, to view the masked data of a column tagged with a policy tag, users need the MaskedReader role.

When to use dynamic data masking vs. encryption functions?

Common scenarios for using data masking or column-level encryption are:

- protecting against unauthorized data leakage
- access control management
- compliance with data privacy laws for PII, PHI, and PCI data
- creating safe test datasets

Specifically, masking can be used for real-time transactions, whereas encryption provides additional security for data at rest or in motion where real-time usability is not required.
Any masking policies or encryption applied to the base tables are carried over to authorized views and materialized views, and masking and encryption are compatible with other security features such as row-level security.

These newly added BigQuery security features, along with automatic DLP, can help you scan data across your entire organization, give you visibility into where sensitive data is stored, and enable you to manage access and usability of data for different use cases across your user base. We're always working to enhance BigQuery's (and Google Cloud's) data governance capabilities to enable end-to-end management of your sensitive data. With these new releases, we are adding deeper protections for your data in BigQuery.
Source: Google Cloud Platform

Introducing Firehose: An open source tool from Gojek for seamless data ingestion to BigQuery and Cloud Storage

Indonesia's largest hyperlocal company, Gojek has evolved from a motorcycle ride-hailing service into an on-demand mobile platform providing a range of services that include transportation, logistics, food delivery, and payments. A total of 2 million driver-partners collectively cover an average distance of 16.5 million kilometers each day, making Gojek Indonesia's de facto transportation partner.

To continue supporting this growth, Gojek runs hundreds of microservices that communicate across multiple data centers. Applications are based on an event-driven architecture and produce billions of events every day. To empower data-driven decision-making, Gojek uses these events across products and services for analytics, machine learning, and more.

Data warehouse ingestion challenges

To make sense of large amounts of data, and to better understand customers for app development, customer support, growth, and marketing purposes, data must first be ingested into a data warehouse. Gojek uses BigQuery as its primary data warehouse. But ingesting events at Gojek's scale, with rapid changes, poses the following challenges:

- With multiple products and microservices offered, Gojek releases new Kafka topics almost every day, and they need to be ingested for analytical purposes. This can quickly result in significant operational overhead for the data engineering team deploying new jobs to load data into BigQuery and Cloud Storage.
- Frequent schema changes in Kafka topics require consumers of those topics to load the new schema to avoid data loss and capture more recent changes.
- Data volumes can vary and grow exponentially as people start building new products and logging new activities on top of a new topic. Each topic can also have a different load during peak business hours. Customers need to handle the rising volume of data and scale quickly per their business needs.

Firehose and Google Cloud to the rescue

To solve these challenges, Gojek uses Firehose, a cloud-native service that delivers real-time streaming data to destinations such as service endpoints, managed databases, data lakes, and data warehouses like Cloud Storage and BigQuery. Firehose is part of the Open Data Ops Foundation (ODPF) and is fully open source. Gojek is one of the major contributors to ODPF.

Here are Firehose's key features:

- Sinks: Firehose supports sinking stream data to the log console, HTTP, GRPC, PostgresDB (JDBC), InfluxDB, Elastic Search, Redis, Prometheus, MongoDB, GCS, and BigQuery.
- Extensibility: Firehose allows users to add a custom sink with a clearly defined interface, or choose from existing sinks.
- Scale: Firehose scales in an instant, both vertically and horizontally, for a high-performance streaming sink with zero data drops.
- Runtime: Firehose can run inside containers or VMs in a fully managed runtime environment like Kubernetes.
- Metrics: Firehose always lets you know what's going on with your deployment, with built-in monitoring of throughput, response times, errors, and more.

Key advantages

Using Firehose to ingest data into BigQuery and Cloud Storage has multiple advantages.

Reliability

Firehose is battle-tested for large-scale data ingestion. At Gojek, Firehose streams 600 Kafka topics into BigQuery and 700 Kafka topics into Cloud Storage. On average, 6 billion events are ingested daily into BigQuery, resulting in more than 10 terabytes of daily data ingestion.

Streaming ingestion

A single Kafka topic can produce up to billions of records in a day. Depending on the nature of the business, scalability and data freshness are key to ensuring the usability of that data, regardless of the load. Firehose uses BigQuery streaming ingestion to load data in near real time. This allows analysts to query data within five minutes of it being produced.
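Firehose's internals aside, the underlying mechanism here is the BigQuery streaming API. A minimal sketch with the Python client library, where the table name and row payload are illustrative assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Placeholder table; streamed rows become queryable within seconds.
    table_id = "my-project.my_dataset.booking_events"
    rows = [
        {"event_id": "e-1", "vehicle_id": "v-42",
         "event_time": "2022-06-01T12:00:00Z"},
    ]

    # insert_rows_json streams rows into the table and returns
    # a list of per-row errors (empty on success).
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        print("Row-level insert errors:", errors)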
Schema evolution

With multiple products and microservices offered, new Kafka topics are released almost every day, and the schemas of Kafka topics constantly evolve as new data is produced. A common challenge is ensuring that as these topics evolve, their schema changes are reflected in BigQuery tables and Cloud Storage. Firehose tracks schema changes by integrating with Stencil, a cloud-native schema registry, and automatically updates the schema of BigQuery tables without human intervention. This reduces data errors and saves developers hundreds of hours.

Elastic infrastructure

Firehose can be deployed on Kubernetes and runs as a stateless service. This allows Firehose to scale horizontally as data volumes vary.

Organizing data in Cloud Storage

The Firehose GCS sink provides capabilities to store data based on specific timestamp information, allowing users to customize how their data is partitioned in Cloud Storage.

Supporting a wide range of open source software

Built for flexibility and reliability, Google Cloud products like BigQuery and Cloud Storage are made to support a multi-cloud architecture. Open source software like Firehose is just one of many examples that can help developers and engineers optimize productivity. Taken together, these tools can deliver a seamless data ingestion process, with less maintenance and better automation.

How you can contribute

Development of Firehose happens in the open on GitHub, and we are grateful to the community for contributing bug fixes and improvements. We would love to hear your feedback via GitHub discussions or Slack.
Source: Google Cloud Platform

Pride Month: Q&A with bunny.money founders about saving for good

June is Pride Month, a time for us to come together to bring visibility and belonging, and to celebrate the diverse set of experiences, perspectives, and identities of the LGBTQ+ community. This month, Lindsey Scrase, Managing Director, Global SMB and Startups at Google Cloud, is showcasing conversations with startups led by LGBTQ+ founders and how they use Google Cloud to grow their businesses. This feature highlights bunny.money and its founders, Fabien Lamaison, CEO, Thomas Ramé, Technology Lead, and Cyril Goust, Engineering Lead.

Lindsey: Thanks Fabien, Thomas, and Cyril. It's great to connect with you and talk about bunny.money. I love how you're bringing a creative twist to fintech and giving back to communities. What inspired you to found the company?

Fabien: One of my favorite childhood toys was an old-fashioned piggy bank. I remember staring at it and trying to figure out how much of my allowance should be saved, spent, or given to charity. As you can imagine, there were lots of ideas racing through my mind, but saving and giving back were always important to me. Years later, I realized I could combine my passions for banking, technology, and helping others by creating a fintech service that makes it easy for people to save while donating to their favorite causes.

Fabien Lamaison, CEO of bunny.money

Lindsey: My brothers and I did something similar where we allocated a portion of any money we made as kids to giving. And I too had a piggy bank, a beautiful one that could only be opened by breaking it. Needless to say, it was a good saving mechanism! It's inspiring to see you carrying your personal values forward into bunny.money to help others do the same. Tell us more about bunny.money?

Fabien: bunny.money plays with the concept of reimagining saving, and offers a way to positively disrupt conventional banking. For us bunnybankers, financial and social responsibility go hand in hand. We empower people to build more sustainable, inclusive financial futures. Looking ahead, we not only want to help people set up recurring schedules for saving and donating, but also offer more options for socially responsible investing and help companies better match employee donations to charitable causes and build out retirement plans.

Lindsey: It sounds like you're not only disrupting traditional banking services but also how people manage their finances. How does bunny.money serve its customers?

Fabien: bunny.money is a fintech company founded on the principles of providing easy, free, and ethical banking services. Our comprehensive banking platform enables customers to quickly open a savings wallet and schedule recurring deposits.

Thomas: bunny.money is also a fintech bridge that connects people and businesses to the communities and causes they care about. With bunny.money, customers can make one-time or recurring donations to the nonprofits of their choice. bunny.money doesn't charge recipients fees to process donations. We give customers the option of offering us a tip, but it's not required.

Lindsey: So with bunny.money, what are some of the nonprofits people can donate to?

Fabien: Over 30 organizations have already joined bunny.money's nonprofit marketplace, including StartOut, TurnOut, Trans Lifeline, and Techqueria.
Some are seeing donations increase by up to 20 percent as they leverage bunny.money to gamify fundraising, promote social sharing, and encourage micro-donations from their members and supporters.

Cyril: bunny.money also helps people discover local causes and nonprofits, such as food banks requesting volunteers, parks that need to be cleaned, and mentoring opportunities. I'm particularly excited to see bunny.money help people build a fairer, greener society by donating to environmental nonprofits, including Carbon Lighthouse, Sustainable Conservation, Public Land Water Association, back2earth, and FARMS. We also decided to "lead by example" and pledge to give 1% of our revenues to 1% for the Planet.

Lindsey: Given your business and the services you offer, I imagine you've encountered immense complexity along the way. What were some of the biggest challenges that you had to overcome?

Fabien: One of our biggest challenges was helping people understand saving for good, and purpose-led banking, which is a relatively new idea in fintech. Although there are plenty of mobile banking apps, most don't offer an easy way for people to improve their personal finances and donate to their favorite causes in one convenient place.

Cyril: On the technical side, we needed to comply with strict industry regulations, including all applicable requirements under the Bank Secrecy Act and the USA PATRIOT Act. These regulations protect sensitive financial data and help fight against fraudulent activities such as money laundering.

Lindsey: Can you talk about how Google Cloud is helping you address these challenges?

Thomas: Protecting client data is a top priority for us, so we built bunny.money on the highly secure-by-design infrastructure of Google Cloud. Google Cloud automatically encrypts data in transit and at rest, and the solutions comply with all major international security standards and regulations right out of the box. Although we serve customers in the U.S. today, Google Cloud's distributed data centers will allow us to meet regional security requirements and eventually reach customers worldwide with quality financial services.

Thomas Ramé, Technology Lead at bunny.money

Fabien: We wanted to build a reliable, feature-rich fintech platform and design a responsive mobile app with an intuitive user interface (UI). We knew from experience that Google Cloud is easy to use and offers integrated tools, APIs, and solutions. We also wanted to tap into the deep technical knowledge of the Google for Startups team to help us scale bunny.money and affordably trial different solutions with Google for Startups Cloud Program credits.

Cyril: As a Certified Benefit Corporation™ (B Corp™), it is also important for us to work with companies that align with the values we champion, such as diversity and environmental sustainability. Google Cloud is carbon neutral and enables us to accurately measure, report, and reduce our cloud carbon emissions.

Lindsey: This is exactly how we strive to support startups at all stages: with the right technology, offerings, and support to help you scale quickly and securely, all while being the cleanest cloud in the industry. Can you go into more detail about the Google Cloud solutions you use, and how they all come together to support your business and customers?

Fabien: Our save for good® mobile app enables customers to securely create accounts, verify identities, and connect to external banks in just under four minutes.
Thomas: With Google Cloud, bunny.money consistently delivers a reliable, secure, and seamless banking experience. Since recently launching our fintech app, we've already seen an incredible amount of interest in our services, which enable people to grow financially while contributing to causes they are passionate about. Right now, we're seeing customers typically allocate about 10 percent of each deposit to their favorite charities.

Cyril: The extensive Google Cloud technology stack helps us make it happen. We can use BigQuery to unlock data insights, Cloud SQL to seamlessly manage relational database services, and Google Kubernetes Engine (GKE) to automatically deploy and scale Kubernetes. These solutions enable us to cost-effectively scale bunny.money and build out a profitable fintech platform.

Cyril Goust, Engineering Lead at bunny.money

Thomas: In addition to the solutions Cyril mentioned, we use Cloud Scheduler to manage cron job services, Dataflow to unify stream and batch data processing, and Container Registry to securely store Docker container images. We're always innovating, and Google Cloud helps our small team accelerate the development and deployment of new services.

Lindsey: It's exciting to hear your story and the many different ways that Google Cloud technology has been able to support you along the way. You're creating something that effects change on many levels, from how people save and give to how businesses and nonprofits can engage. Since it is also Pride Month, I want to change focus for a minute and talk about how being part of the LGBTQ+ community impacted your approach to starting bunny.money.

Fabien: I believe we all belong to several communities (family, friend "tribes," sports, groups of interest) that are different layers of our own identity and way of life. I'm part of the LGBTQ+ community, and I'm also an immigrant, for example. I'm now a French-American, as is my husband, and we live in San Francisco. But even as a couple, we still had to live apart for several years, he in Paris and I in San Francisco, as we worked through issues with his U.S. work visa (same-sex weddings were not possible at the federal level at that time, so we couldn't be under the same visa application).

Fortunately, the LGBTQ+ community can be like an extended family, both professionally and personally. Personally, I've had the support of friends as my husband and I dealt with immigration and work challenges. And professionally, I've experienced incredible support in the startup world from nonprofits such as StartOut, which provides key resources to help LGBTQ+ entrepreneurs grow their businesses.

Lindsey: I can only imagine the emotional toll that being apart created for you and your husband, and I'm so glad that it eventually worked out. My wife is Austrian, and while we are fortunate to be here together, this intersectionality has created an additional layer of complexity for us over the years as we have started a family. Do you have any advice for others in the LGBTQ+ community looking to start and grow their own companies? You mentioned StartOut, and I know there are additional organizations LGBTQ+ entrepreneurs can turn to for help, including Lesbians who Tech, Out in Tech, High Tech Gays (HTG) – Queer Silicon Valley, and QueerTech NYC (Meetup).

Fabien: I would suggest really exploring what you're passionate about. I've enjoyed focusing on saving and finances since I was young and have always been passionate about giving back.
Being part of the LGBTQ+ community, or really any community that's viewed as an "outsider," gives you the opportunity to think differently. When you bring your passion and life experiences together, you can start to imagine new ways of doing things. By engaging in your communities, it can be easier to find others who share your experiences, interests, and even values. You bring the best from each world.

Since LGBTQ+ founders and entrepreneurs might belong to several groups, it's good to explore all available avenues and resources, including the organizations you mentioned earlier. We can always learn and accomplish more when we work together. I've experienced that in the LGBTQ+, immigrant, and fintech communities alike.

Lindsey: The importance of community underlies so many aspects of your identity as a founder, as someone who has moved to the US from France, and as a member of the LGBTQ+ community. I'm so glad that you've sought out, and received, support along the way. I agree it's so important for others to seek out this community and support. And to close, would you be able to share any next steps for bunny.money?

Fabien: We're looking forward to helping customers build more sustainable and inclusive financial futures on our platform. We'll continue contributing to positive change in the world by rolling out new AI-powered services to enable ethical investing and personalized giving and impact programs. As we build this first banking app for personal and workplace giving, our goal is to benefit all communities by bridging the gap between businesses and people, which is why we're excited to continue working with partners like Google for Startups and GV (GV offers us valuable mentor sessions during our accelerator program at StartOut).

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform