The labelling of a table column must be meaningful. It doesn't matter if a column name is longer than we would like; what counts here is not aesthetics but relevance, for when another person has to analyze the table's attributes. The same goes for naming functions, variables, modules, and the rest: clear names that express the purpose of a code block really make the difference. We have to have the discipline to commit to high quality in code, in table naming, even in path naming for Docker containers in artifact registries. Over time, it also trains your eye for when we have to work deep in data quality screens.
Luis J Pinto B’s Post
More Relevant Posts
-
LeetCode Problem 341, "Flatten Nested List Iterator," challenges the creation of an iterator to flatten a nested list structure of integers. The data structure may contain both individual integers and other nested lists, requiring the iterator to traverse deeply through multiple levels of nesting to access all integers linearly. Implementing this involves using a stack to manage the nested elements, pushing nested lists onto the stack and popping them to process elements sequentially. The iterator, on each call to `next()`, ensures it provides the next integer in order, and `hasNext()` confirms if more integers are available. This problem tests understanding of stack usage, design patterns for iterators, and managing complex nested data structures efficiently.
LEETCODE 341 : PUSH ITERATOR PAIR PATTERN : FLATTEN LIST ITERATOR
https://www.youtube.com/
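A minimal sketch of the stack-based "push iterator pair" approach the post describes, in Python. The `NestedInteger` stub stands in for LeetCode's provided interface (the real problem supplies it for you); the iterator keeps `(list, next index)` pairs on a stack, so `hasNext()` peeks and `next()` consumes:

```python
class NestedInteger:
    """Minimal stand-in for LeetCode's NestedInteger interface (assumption)."""
    def __init__(self, value):
        self._value = value  # either an int or a list of NestedInteger

    def isInteger(self):
        return isinstance(self._value, int)

    def getInteger(self):
        return self._value

    def getList(self):
        return self._value


class NestedIterator:
    def __init__(self, nestedList):
        # Stack of (list, next index) pairs -- the "push iterator pair" pattern.
        self.stack = [(nestedList, 0)]

    def hasNext(self):
        # Advance until the top of the stack points at an integer, or run out.
        while self.stack:
            lst, i = self.stack[-1]
            if i == len(lst):            # this level is exhausted
                self.stack.pop()
            elif lst[i].isInteger():     # next integer is ready; don't consume it
                return True
            else:                        # descend into the nested list
                self.stack[-1] = (lst, i + 1)
                self.stack.append((lst[i].getList(), 0))
        return False

    def next(self):
        lst, i = self.stack[-1]
        self.stack[-1] = (lst, i + 1)    # consume the integer
        return lst[i].getInteger()
```

Because `hasNext()` only peeks, calling it multiple times in a row is safe, which is what the iterator contract requires.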
-
Model development is useless if it doesn't go to production. ML Experiment 4 (MLOps): deployed a BERT model on a Kubernetes cluster for a masking task, e.g. Input: "Today is a [MASK] day" → Output: Good, Awesome, etc. The use case is basic but includes various cool engineering components: 1. Model inference server: used the Hugging Face Transformers library and transformers-cli to create the model inference server. 2. Containerizing the ML application: used Docker containers for portability and pushed the image to Docker Hub (vaibhaw06/bert-kubernetes). 3. Deployment on K8s pods: finally, deployed on K8s pods using minikube and kubectl. ML Umbrella Blog: https://buff.ly/3zB4SGc Github link: https://buff.ly/3zm3OG8
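A minimal Kubernetes manifest sketch for step 3, assuming the `vaibhaw06/bert-kubernetes` image serves inference on port 8000 (the port, resource names, and labels are assumptions, not taken from the post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bert-masking
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bert-masking
  template:
    metadata:
      labels:
        app: bert-masking
    spec:
      containers:
        - name: bert
          image: vaibhaw06/bert-kubernetes
          ports:
            - containerPort: 8000   # assumed serving port
---
apiVersion: v1
kind: Service
metadata:
  name: bert-masking
spec:
  selector:
    app: bert-masking
  ports:
    - port: 80
      targetPort: 8000
```

With minikube, something like `kubectl apply -f bert.yaml` followed by `minikube service bert-masking` would expose the inference endpoint locally.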
-
YOLO inference with Docker via API
towardsdatascience.com
-
My method GetArtifacts filtered a list of artifacts based on their IDs and a specified year range. It took an instance of a Wonder object, presumably providing context for the artifacts; a collection of strings representing the IDs of the artifacts to retrieve; and optional parameters defining the year range for filtering. The method initialized an empty list to store the filtered artifacts, iterated through each artifactId in artifactIds, retrieved the artifact using the GetArtifact method, and checked whether the artifact's ObjectBeginYear and ObjectEndYear fell within the specified year range. If the artifact met the criteria, it was added to the list, and the method returned the list of matching artifacts. I considered adding null checks on wonder and artifactIds to prevent potential runtime exceptions, and I used LINQ for filtering to make the code more concise and readable. If GetArtifact involves expensive operations, consider caching the artifacts or retrieving them in bulk if possible. I also clarified whether artifacts that start or end exactly on fromYear or toYear should be included, adjusting the boundary conditions as needed. These adjustments enhanced the robustness and readability of my code.
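The original method is C#, but the same filtering logic can be sketched in Python; the `Artifact` dataclass, the in-memory store, and the `get_artifact` lookup are all hypothetical stand-ins for the post's GetArtifact call, and the comprehension plays the role of the LINQ filter. Boundary comparisons are inclusive here, which is exactly the fromYear/toYear question the post raises:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Artifact:
    artifact_id: str
    object_begin_year: int
    object_end_year: int

# Hypothetical in-memory store standing in for the post's GetArtifact lookup.
ARTIFACTS = {
    "a1": Artifact("a1", 1200, 1250),
    "a2": Artifact("a2", 1500, 1600),
}

def get_artifact(artifact_id: str) -> Optional[Artifact]:
    return ARTIFACTS.get(artifact_id)

def get_artifacts(artifact_ids: Optional[Iterable[str]],
                  from_year: Optional[int] = None,
                  to_year: Optional[int] = None) -> list:
    """Return artifacts whose [begin, end] years fall inside the given range."""
    if artifact_ids is None:        # null check the post recommends
        return []
    return [
        a for a in (get_artifact(i) for i in artifact_ids)
        if a is not None
        and (from_year is None or a.object_begin_year >= from_year)  # inclusive
        and (to_year is None or a.object_end_year <= to_year)        # inclusive
    ]
```

Caching or a bulk fetch would replace the per-ID `get_artifact` generator if the lookup were expensive, as the post suggests.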
-
General advice is to make prompts to mega models verbose. But by reiterating an instruction you can inadvertently create ambiguity: differences in wording between the repetitions produce less predictable responses. I learned this after spending a day iterating on a prompt and instructions to get Claude 3 Sonnet to make a consistent code refactor across a dozen variations in many files of code.
-
Docker Scout + JFrog Artifactory: integrating Docker Scout with JFrog Artifactory lets you run image analysis automatically on images in Artifactory registries. https://lnkd.in/g5Dx52Xy
Artifactory integration
docs.docker.com
-
If you’ve worked with Helm charts, you might have seen the _helpers.tpl file and wondered about its purpose. Let me summarize it: _helpers.tpl is a special template file in Helm used for defining reusable snippets of code. It helps you in:
1. Promoting reusability: store common logic and values in one place to avoid duplication and keep your charts tidy.
2. Ensuring consistency: define standard labels, annotations, and other elements to maintain uniformity across your resources.
3. Enhancing flexibility: create complex functions and templates that simplify your main chart files, making them easier to manage.
#DevOps #Kubernetes #HelmCharts
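A small sketch of what such a file can look like, assuming a hypothetical chart named "mychart" (the template names and labels are illustrative, not from the post):

```yaml
{{/* _helpers.tpl: named templates shared across the chart */}}

{{- define "mychart.fullname" -}}
{{ .Release.Name }}-{{ .Chart.Name }}
{{- end }}

{{- define "mychart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
```

A manifest in templates/ then reuses these with `include`, e.g. `name: {{ include "mychart.fullname" . }}` or `labels: {{- include "mychart.labels" . | nindent 4 }}`, so every resource gets the same name and label scheme from one place.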
-
"How to Fine-Tune LLMs in 2024 with Hugging Face", but with Dagster, Modal and Llama3! 👉https://lnkd.in/g5cFggsD A couple of months ago, Philipp Schmid released a blog post, "How to Fine-Tune LLMs in 2024 with Hugging Face" [https://lnkd.in/gYtu-QJ4]. I found it very nice and useful, but after using it repeatedly, I decided to make it reusable, reproducible, and easier to start with! It's an expansion of the original blog post, augmented with Docker, Dagster, and Modal Labs, and, of course, based on Llama3. The full code, Docker image, and model checkpoint are available. #LLM #ML #LLMOps #MLOps #infrastructure #GPU #Llama3 #LLMfinetuning
“How to Fine-Tune LLMs in 2024 with Hugging Face”, but with Dagster, Modal and Llama3!
https://kyrylai.com
-
People making GenAI plugins for VS Code? Please make one that dynamically generates and updates documentation as you write a module. Large code changes are already a big enough pain without having to remember to change the documentation too because you removed a redundant optional parameter.
-
Under pressure to push code but want to maintain quality? Let @GPTScript_ai & @Clio_AI assist in code review using #GPTReview 🤖 This 3-part blog series helps you get set up and integrated with #GPTReview quickly. Get started today! ➡️⬇️
Acorn | GPTReview - Build Your Own AI-Based Code Reviewer - Part 1
acorn.io