Hello, my name is Francisco Mateu Araneda and I have 3 years of experience in technology and software development, mainly for the telephony and mining industries. I am considered a proactive professional who learns new technologies quickly and can estimate, develop, test and support IT systems.
I have experience with a wide range of languages and technologies, such as HTML5, CSS3, Bootstrap 4+, JavaScript, jQuery, PHP, .NET Core (C#), Node.js, Angular, React, Python and Java, among others.
Regarding DBMSs (Database Management Systems), I have broad experience using MySQL, PostgreSQL, SQL Server 2016 and MongoDB. I also know how to connect applications to these DBMSs through ODBC / JDBC / ADO drivers.
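As a minimal sketch of parameterized database access through a standard driver interface — using Python's built-in sqlite3 module as a stand-in for an ODBC/JDBC connection; the table and column names are hypothetical:

```python
import sqlite3

# In-memory database as a stand-in for a real DBMS reached via ODBC/JDBC.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema, for illustration only.
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
cur.executemany(
    "INSERT INTO employees (name, role) VALUES (?, ?)",
    [("Ana", "developer"), ("Luis", "analyst")],
)
conn.commit()

# Parameterized query: the driver escapes the values, preventing SQL injection.
cur.execute("SELECT name FROM employees WHERE role = ?", ("developer",))
rows = [r[0] for r in cur.fetchall()]
print(rows)  # ['Ana']
conn.close()
```

The same parameterized pattern applies when the connection object comes from an ODBC or JDBC driver instead of sqlite3.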
I have solid experience securing systems with Basic authentication (user/password) and OAuth 2.0 tokens (Bearer tokens, usually following the JWT standard).
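As a rough sketch of how an HS256 JWT-style Bearer token is signed and verified — standard library only; in production a dedicated library would handle this, and the secret here is a placeholder:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build header.payload and append an HMAC-SHA256 signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature over header.payload and compare in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user-123"}, b"my-secret")
print(verify_jwt(token, b"my-secret"))    # True
print(verify_jwt(token, b"wrong-secret")) # False
```

The verified token would then travel in an `Authorization: Bearer <token>` header on each request.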
I also have cloud experience on Azure, Amazon Web Services and Google Cloud Platform. I can help with software development that integrates:
Cloud Instances: Azure VMs, AWS and GCP instances.
Cloud Storage / Data Lakes: Azure Blob Storage, Amazon S3 buckets and GCP Cloud Storage buckets.
Cloud APIs: Azure App Service, AWS Elastic Beanstalk and Google App Engine.
Cloud Serverless: Azure Functions, AWS Lambda and GCP Functions.
Cloud Data Warehouses: Azure Synapse Analytics, AWS Athena and Google BigQuery.
Cloud Data Streaming: Azure Event Hub, AWS Kinesis and GCP Eventarc.
BI Technologies: Microsoft Power BI, Looker Studio (formerly Google Data Studio), Qlik Sense.
Cloud Data Orchestration: Azure Data Factory, AWS Glue and GCP Cloud Data Fusion.
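To illustrate the serverless style from the list above, here is a minimal AWS Lambda-style handler sketch in Python — the event shape and names are hypothetical, and the same request/response pattern maps onto Azure Functions and GCP Cloud Functions:

```python
import json

def handler(event, context=None):
    """Minimal Lambda-style handler: parse the input, compute, return an HTTP-style response."""
    body = json.loads(event.get("body", "{}"))
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation with a fake API Gateway-style event.
response = handler({"body": json.dumps({"name": "Francisco"})})
print(response["statusCode"])  # 200
```

Deployed behind an API gateway, the platform would call `handler` once per incoming request, scaling instances automatically.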
I have used Databricks + Apache Spark clusters with the PySpark library to run Spark jobs on Azure (the same approach works on AWS and GCP) for processing big data, on demand or on a schedule, across the different data lake zones. These jobs finally build the data warehouse, creating external tables from Apache Parquet files (a best practice for speed and storage size, thanks to Parquet's compression), or from Apache Avro, JSON or XML files. The resulting data is consumed by the BI platform, which can also refresh on demand or periodically, just like the Spark job.
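The zone-based flow above can be sketched in plain Python — standard library only, with in-memory lists standing in for data lake files and JSON standing in for Parquet, since a Spark cluster is not needed to show the idea; the zone names and fields are hypothetical:

```python
import json
from collections import defaultdict

# Raw zone: records as ingested (in practice, files landed in the data lake).
raw_zone = [
    {"site": "mine-a", "sensor": "temp", "value": 21.5},
    {"site": "mine-a", "sensor": "temp", "value": 22.5},
    {"site": "mine-b", "sensor": "temp", "value": 19.0},
]

# Curated zone: clean and aggregate, as a Spark job would before writing Parquet.
totals, counts = defaultdict(float), defaultdict(int)
for rec in raw_zone:
    totals[rec["site"]] += rec["value"]
    counts[rec["site"]] += 1

# Warehouse "external table": one row per site, ready for the BI platform.
warehouse = [
    {"site": site, "avg_temp": totals[site] / counts[site]} for site in sorted(totals)
]
print(json.dumps(warehouse))
```

In PySpark the aggregation step would be a `groupBy(...).avg(...)` over a DataFrame, with the result written out as Parquet instead of printed.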
I can use Docker to containerize apps, which provides a consistent runtime layer across different operating systems.
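A minimal Dockerfile sketch for containerizing a Python app — the file names and entry point are hypothetical placeholders:

```dockerfile
# Small official base image; the tag is an example choice.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define its entry point.
COPY . .
CMD ["python", "main.py"]
```

The same image then runs unchanged on any host with a Docker engine, regardless of the host OS.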
I also have knowledge of software architecture, including real-time client/server communication using sockets (for example, the [login to view URL] library) and good API practices. When the API is the central node of the system — for example a REST API or an MVC (Model/View/Controller) application — I can integrate it with other HTTP request/streaming APIs, databases, file servers, and more.
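A minimal sketch of the real-time client/server pattern using raw TCP sockets in Python — real-time libraries add events, rooms and reconnection on top of this basic idea:

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one client and echo back whatever it sends."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind the server to an ephemeral port on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Run the server in a background thread so the client can connect below.
t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client side: connect, send a message, read the echo.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'hello'
```

A production server would loop over `accept()` and handle each connection concurrently instead of serving a single client.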
I also have some Data Science experience: I have used OCR in Python, together with Spark/pandas/NumPy, to extract data from images and PDFs.
I study constantly and hold 3 certifications.
For more info: [login to view URL] (Will be updated soon)