Machine Learning Site


20/09/2025

What if your code could talk back? I built this silly, custom REPL that compliments you, apologizes, or throws sarcastic shade before running your code. Have a look at how your code replies to your commands:

Pathetic Programming 3: A Python REPL that compliments, apologizes & roasts you. Python Interpreter has never been this politely ridiculous.

07/09/2025

Day 11 of

Today we explored learning rate scheduling, a technique that adjusts the learning rate during training. This can significantly improve performance and stability.

🔹 Why it matters:

If the learning rate is too high, the optimizer overshoots and the model won’t converge.

If it’s too low, training becomes unnecessarily slow.

The learning rate you start with isn’t always the best throughout training.

🔹 Schedulers in PyTorch:

StepLR: Reduce learning rate every fixed number of epochs.

ExponentialLR: Continuously decay the LR by a factor each epoch.

ReduceLROnPlateau: Lower the LR when validation loss stops improving.

🔹 Workflow:

1️⃣ Define your optimizer (e.g., SGD, Adam).

2️⃣ Attach a scheduler like StepLR or ReduceLROnPlateau.

3️⃣ Call scheduler.step() after each epoch (or batch, depending on use case).

4️⃣ Monitor learning rate with scheduler.get_last_lr().

📌 Key takeaway: Using a scheduler helps your model learn fast in the beginning and fine-tune weights later, often leading to better accuracy and smoother convergence.
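To make this concrete, here is a minimal sketch of attaching StepLR to an SGD optimizer; the model, layer sizes, and schedule values are made up purely for illustration:

```python
import torch
import torch.nn as nn

# Toy model and optimizer (illustrative sizes and learning rate).
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# StepLR: multiply the LR by gamma every step_size epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    # ... forward pass, loss computation, and loss.backward() would go here ...
    optimizer.step()                        # update weights
    scheduler.step()                        # advance the schedule once per epoch
    print(epoch, scheduler.get_last_lr())   # monitor the current learning rate
```

For ReduceLROnPlateau, you would instead pass the validation loss to scheduler.step(val_loss) so the LR drops only when improvement stalls.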

29/08/2025

Day 9 of

Dropout is a regularization technique to reduce overfitting by randomly setting a fraction of activations to zero during training.

👉 Encourages redundancy in learning
👉 Probability p controls how many neurons are dropped
👉 Active only during training (model.train())
👉 Disabled during evaluation/inference (model.eval())

Dropout makes models more robust and generalizable.
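Here is a small, hypothetical sketch of how this looks in PyTorch; the layer sizes and the value of p are arbitrary:

```python
import torch
import torch.nn as nn

# Illustrative network with a dropout layer between the hidden and output layers.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # p = probability of zeroing an activation
    nn.Linear(64, 2),
)

x = torch.randn(8, 20)

model.train()            # dropout active: random activations are zeroed
train_out = model(x)

model.eval()             # dropout disabled: outputs are deterministic
with torch.no_grad():
    eval_out = model(x)
```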

25/08/2025

New blog is live! I just published Tutorial: URDF for Robot Modeling (Part 1).

In this post, I break down robot modeling into simple, practical steps — perfect if you’re starting out with ROS2 and URDF.


Getting started with robot modeling in ROS2? This guide introduces URDF in the most practical way: by building a simple robot from scratch and seeing it come alive in RViz2.

25/08/2025

Day 7 of

Today we learned how to save and load PyTorch models, which is crucial for real-world projects.

Step-by-step:

1️⃣ state_dict: Contains all model parameters like weights and biases.

2️⃣ Saving: Use torch.save(model.state_dict(), filename) to store parameters.

3️⃣ Loading: Create a new model instance and load parameters using load_state_dict.

4️⃣ Evaluation mode: Use .eval() for inference to disable dropout/batchnorm effects.

5️⃣ Use case: Resume training, share models, or deploy trained networks.

Saving and loading models ensures reproducibility and makes it easier to manage large projects.
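A minimal sketch of that flow; the architecture and filename here are made up for illustration:

```python
import torch
import torch.nn as nn

# Illustrative model definition.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = Net()

# Save only the parameters (state_dict), not the whole object.
torch.save(model.state_dict(), "model_weights.pth")

# Load: rebuild the same architecture, then restore the parameters.
restored = Net()
restored.load_state_dict(torch.load("model_weights.pth"))

# Switch to evaluation mode before inference.
restored.eval()
```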

23/08/2025

Vehicle URDF model is complete and visualized in RViz2. Next step: refactoring with Xacro to make the robot description modular, easier to maintain, and ready for future extensions.

20/08/2025

Day 6 of

Today we explored Datasets and DataLoaders, which make handling and batching data in PyTorch easy and efficient.

Step-by-step:

1️⃣ Dataset: Encapsulates your data and defines how to access individual samples. Can be built from tensors, NumPy arrays, or custom files.

2️⃣ DataLoader: Automatically handles batching, shuffling, and parallel loading. Improves training speed and stability.

3️⃣ Batches: Models are trained on mini-batches instead of the full dataset at once, which saves memory and improves convergence.

4️⃣ Iteration: Use a simple for loop to access each batch during training.

Using Datasets and DataLoaders is a foundational step for scalable PyTorch projects.
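A small sketch with random tensors (shapes and batch size are arbitrary) showing the Dataset → DataLoader → loop pattern:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Illustrative random data.
features = torch.randn(100, 3)
targets = torch.randn(100, 1)

dataset = TensorDataset(features, targets)          # wraps tensors as (x, y) samples
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for batch_x, batch_y in loader:                     # one mini-batch per iteration
    print(batch_x.shape, batch_y.shape)             # e.g. torch.Size([16, 3]) torch.Size([16, 1])
    break
```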

19/08/2025

Here’s the core of our robot — a rectangular chassis block. This base will support all the key components: wheels, motors, sensors, and controllers, defining the robot’s size, weight, and movement capabilities.

18/08/2025

Day 5 of

Today we learned about loss functions and optimizers—the two key components that make model training possible.

Step-by-step:

1️⃣ Loss functions: Measure how far predictions are from targets. Example: MSELoss for regression tasks.

2️⃣ Optimizers: Adjust model weights to minimize loss. Example: SGD (Stochastic Gradient Descent).

3️⃣ Training loop:
- Forward pass → calculate predictions
- Compute loss → loss.backward() finds gradients
- Optimizer step → updates model weights

4️⃣ optimizer.zero_grad(): Clears old gradients before the next step.

This forward → loss → backward → step cycle is the backbone of every deep learning model in PyTorch.
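Putting the pieces together, here is a minimal sketch of that cycle on toy data; the linear model and the 3x + 1 targets are just for illustration:

```python
import torch
import torch.nn as nn

# Toy regression setup.
model = nn.Linear(1, 1)
criterion = nn.MSELoss()                                   # loss function for regression
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # optimizer updates the weights

x = torch.randn(32, 1)
y = 3 * x + 1                                              # illustrative target relationship

for epoch in range(100):
    optimizer.zero_grad()        # clear gradients from the previous step
    pred = model(x)              # forward pass
    loss = criterion(pred, y)    # compute loss
    loss.backward()              # backward pass: compute gradients
    optimizer.step()             # update weights
```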

17/08/2025

Understand the fundamentals of Bayes’ Theorem and see its real-world application in Naive Bayes-based text classification.

Learn Bayes’ Theorem with Python and build a Naive Bayes spam classifier from scratch to see probabilities in action.
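As a quick taste, here is a tiny Bayes’ theorem calculation in Python; the spam prior and word likelihoods are invented numbers, not real statistics:

```python
# P(spam | "free") from made-up counts.
p_spam = 0.4                 # prior P(spam)
p_word_given_spam = 0.6      # likelihood P("free" | spam)
p_word_given_ham = 0.05      # likelihood P("free" | not spam)

# Total probability of seeing the word (law of total probability).
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: posterior = likelihood * prior / evidence.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(p_spam_given_word)     # approximately 0.889
```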

16/08/2025

Day 4 of

Today we explored PyTorch tensors and automatic differentiation, which are the foundation of all models.

Step-by-step:

1️⃣ Tensors: Core data structure in PyTorch; can store data on CPU or GPU.

2️⃣ requires_grad=True: Tells PyTorch to track operations on this tensor.

3️⃣ Operations: Any math operation on tensors builds a computation graph.

4️⃣ Backward: Calling .backward() computes gradients automatically.

5️⃣ Gradients: Stored in .grad, used by optimizers for updating model weights.

Understanding tensors and autograd is essential for training neural networks and debugging complex models.
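A minimal sketch of that chain, using made-up values:

```python
import torch

# Track operations on x so gradients can be computed later.
x = torch.tensor([2.0, 3.0], requires_grad=True)

y = (x ** 2).sum()     # builds a small computation graph: y = x1^2 + x2^2
y.backward()           # compute dy/dx automatically

print(x.grad)          # tensor([4., 6.]), i.e. the gradient 2*x
```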
