MYCSS

2024-03-28

Natural Language Processing on Google Cloud | Google Cloud Skills Boost

Steps toward acquiring the skills needed for AI & Data specializations on the Google Cloud Skills Boost platform, thanks to an opportunity provided by Google Ukraine.

Course: Natural Language Processing on Google Cloud


Natural Language Processing on Google Cloud, Mar 26, 2024

Summary

This course introduces the products and solutions to solve NLP problems on Google Cloud. Additionally, it explores the processes, techniques, and tools to develop an NLP project with neural networks by using Vertex AI and TensorFlow.

  1. NLP on Google Cloud
  2. NLP with Vertex AI
  3. Text representation
  4. NLP models
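
As a quick reminder of what "text representation" means in practice, here is a minimal sketch of my own (not course material, assuming a recent TensorFlow 2.x): raw strings are turned into token ids and then into trainable embeddings. The vocabulary size, sequence length, and layer sizes are arbitrary.

import tensorflow as tf
from tensorflow.keras import layers

# Toy corpus; in the course this would come from a real dataset.
texts = tf.constant(["great course on nlp", "vertex ai and tensorflow"])

# TextVectorization maps raw strings to padded sequences of token ids.
vectorizer = layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vectorizer.adapt(texts)  # build the vocabulary from the corpus

model = tf.keras.Sequential([
    vectorizer,                                        # strings -> token ids
    layers.Embedding(input_dim=1000, output_dim=16),   # ids -> dense vectors
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
print(model(tf.constant(["a short example sentence"])).shape)  # (1, 1)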

2024-03-26

Computer Vision Fundamentals on Google Cloud | Google Cloud Skills Boost

Steps toward acquiring the skills needed for AI & Data specializations on the Google Cloud Skills Boost platform, thanks to an opportunity provided by Google Ukraine.

Course: Computer Vision Fundamentals on Google Cloud

Computer Vision Fundamentals on Google Cloud, Mar 25, 2024

Summary

This course describes different types of computer vision use cases and then highlights different machine learning strategies for solving these use cases. The strategies vary from experimenting with pre-built ML models through pre-built ML APIs and AutoML Vision to building custom image classifiers using linear models, deep neural network (DNN) models or convolutional neural network (CNN) models.

The course shows how to improve a model's accuracy with augmentation, feature extraction, and fine-tuning hyperparameters while trying to avoid overfitting the data.
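
The augmentation mentioned above can be expressed directly as Keras layers. This is my own minimal sketch, not the course's code; the specific transforms and the tiny model around them are arbitrary assumptions.

import tensorflow as tf
from tensorflow.keras import layers

augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),   # mirror images left-right
    layers.RandomRotation(0.1),        # rotate by up to ~36 degrees
    layers.RandomZoom(0.1),            # zoom in/out by up to 10%
])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    augmentation,                      # active only during training
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])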

The course also looks at practical issues that arise, for example, when one doesn't have enough data and how to incorporate the latest research findings into different models.

Learners will get hands-on practice building and optimizing their own image classification models on a variety of public datasets in the labs they will work on.

  • Module 1: Introduction to Computer Vision and Pre-built ML Models with Vision API
  • Module 2: Vertex AI and AutoML Vision on Vertex AI
  • Module 3: Custom Training with Linear, Neural Network and Deep Neural Network Models
  • Module 4: Convolutional Neural Networks
  • Module 5: Dealing with Image Data

2024-03-24

Production Machine Learning Systems | Google Cloud Skills Boost

Steps toward acquiring the skills needed for AI & Data specializations on the Google Cloud Skills Boost platform, thanks to an opportunity provided by Google Ukraine.

Course: Production Machine Learning Systems

Production Machine Learning Systems, Mar 23, 2024

Summary

This course covers how to implement the various flavors of production ML systems: static, dynamic, and continuous training; static and dynamic inference; and batch and online processing. You delve into TensorFlow abstraction levels, the various options for doing distributed training, and how to write distributed training models with custom estimators.
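
As a rough illustration of the distributed-training idea (my own sketch, not the course's code, and using the Keras API rather than the custom estimators mentioned above), synchronous data-parallel training across local GPUs can look like this:

import tensorflow as tf

# MirroredStrategy replicates the model on every local GPU and keeps the
# copies in sync; with no GPU it falls back to a single device.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():   # variables created here are mirrored across replicas
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(dataset, ...) then splits each global batch across the replicas.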

This is the second course of the Advanced Machine Learning on Google Cloud series. After completing this course, enroll in the Image Understanding with TensorFlow on Google Cloud course.

  • Module 1: Architecting Production ML Systems
  • Module 2: Designing Adaptable ML Systems
  • Module 3: Designing High-performance ML Systems
  • Module 4: Hybrid ML Systems

2024-03-20

Machine Learning in the Enterprise | Google Cloud Skills Boost

Steps toward acquiring the skills needed for AI & Data specializations on the Google Cloud Skills Boost platform, thanks to an opportunity provided by Google Ukraine.

Course: Machine Learning in the Enterprise

Machine Learning in the Enterprise, Mar 20, 2024

Summary

This course encompasses a real-world practical approach to the ML workflow: a case study approach that presents an ML team faced with several ML business requirements and use cases. The team must understand the tools required for data management and governance and consider the best approach for data preprocessing, from an overview of Dataflow and Dataprep to using BigQuery for preprocessing tasks.

The team is presented with three options to build machine learning models for two specific use cases. This course explains why the team would use AutoML, BigQuery ML, or custom training to achieve their objectives. A deeper dive into custom training is also presented, covering custom training requirements from training code structure, storage, and loading large datasets to exporting a trained model.

You will build a custom training machine learning model, which allows you to build a container image with little knowledge of Docker.

The case study team examines hyperparameter tuning using Vertex Vizier and how it can be used to improve model performance. To understand more about model improvement, we dive into a bit of theory: we discuss regularization, dealing with sparsity, and many other essential concepts and principles. We end with an overview of prediction and model monitoring and how Vertex AI can be used to manage ML models.
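
To make the regularization part concrete, here is a minimal Keras sketch of my own (not from the course) showing L1/L2 weight penalties and dropout; the layer sizes and penalty strengths are arbitrary assumptions.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    # L2 shrinks weights toward zero; L1 pushes some weights exactly to zero (sparsity).
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)),
    layers.Dropout(0.3),  # randomly zeroes 30% of activations during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")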

  • Module 1: Understanding the ML Enterprise Workflow
  • Module 2: Data in the Enterprise
  • Module 3: Science of Machine Learning and Custom Training
  • Module 4: Vertex Vizier Hyperparameter Tuning
  • Module 5: Prediction and Model Monitoring Using Vertex AI
  • Module 6: Vertex AI Pipelines
  • Module 7: Best Practices for ML Development

2024-03-10

Feature Engineering | Google Cloud Skills Boost

Steps toward acquiring the skills needed for AI & Data specializations on the Google Cloud Skills Boost platform, thanks to an opportunity provided by Google Ukraine.

Course: Feature Engineering

Feature Engineering, Mar 9, 2024
Summary

Want to know about Vertex AI Feature Store? Want to know how you can improve the accuracy of your ML models? What about how to find which data columns make the most useful features? Welcome to Feature Engineering, where we discuss good versus bad features and how you can preprocess and transform them for optimal use in your models. This course includes content and labs on feature engineering using BigQuery ML, Keras, and TensorFlow.
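
As a taste of what "preprocess and transform" means with Keras, here is a minimal sketch of my own (not course material, assuming a recent TensorFlow 2.x with Keras preprocessing layers); the column names and values are made up.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

ages = np.array([[23.0], [35.0], [58.0], [41.0]], dtype="float32")
cities = np.array([["kyiv"], ["lviv"], ["kyiv"], ["odesa"]])

norm = layers.Normalization()   # z-score scaling for a numeric column
norm.adapt(ages)

lookup = layers.StringLookup(output_mode="one_hot")   # categorical -> one-hot
lookup.adapt(cities)

print(norm(ages))      # standardized ages
print(lookup(cities))  # one-hot encoded cities (index 0 is reserved for OOV)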

2024-03-07

The "grey to rgb" image task in a #keras model

Here is why an FC (Dense) layer with a "ReLU" activation cannot be used for this task: with a single input channel, each output channel is just the gray value scaled by an (untrained) weight, and ReLU zeroes out any channel whose weight happens to be negative:

layers.Dense(3, activation="relu", name="gray_rgb", input_shape=(32,32,1))

FC, relu
The best option is to prepare the dataset instead:
tx = np.repeat(x, 3, axis=-1)
or
tx = np.tile(x, (1, 1, 3))
Or a Lambda layer (though there is an open question about saving such a model to a file):
layers.Lambda(lambda x: tf.repeat(x, 3, axis=-1))
Grey to RGB
# Visualize each channel of the converted RGB training images
rows = 4
plt.figure(figsize=(10, 3 * rows))
cols = rgb_images_train.shape[-1]   # number of channels (3)
total = cols * rows
labels = ["R", "G", "B"]
for i in range(total):
    plt.subplot(rows, cols, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    id = i % cols        # channel index
    rid = i // cols      # image index
    plt.imshow(rgb_images_train[rid, :, :, id], cmap=plt.cm.binary)
    plt.ylabel(f"Image {rid}, label: {np.argmax(y_train[rid])}")
    plt.xlabel(f"channel {id}: '{labels[id]}'")
plt.show()
Or a Conv2D layer:
layers.Conv2D(3, (1, 1), use_bias=False, padding="same", kernel_initializer="ones", name="conv2d_108", input_shape=(32,32,1))  # with one input channel, an all-ones 1x1 kernel copies the gray value into each of the 3 output channels
Conv2D 1×1
# Build a model that returns the output of every layer, run one test image
# through it, and plot each layer's feature maps.
activation_model = Model(inputs=model.input,
                         outputs=[layer.output for layer in model.layers])

activations = activation_model.predict(x_test[0].reshape(1, 32, 32, 1))

for layer_index, layer_activation in enumerate(activations):
    print(f"{layer_index=}, {layer_activation.shape=}")
    if len(layer_activation.shape) == 4:  # only 4-D (batch, h, w, channels) outputs are image-like
        num_features = layer_activation.shape[-1]
        size = layer_activation.shape[1]

        rows = num_features
        cols = layer_activation.shape[-1]

        plt.figure(figsize=(16, 12))
        for i in range(num_features):
            plt.subplot(rows, cols, i + 1)
            img = layer_activation[0, :, :, i]   # feature map i of the single test image
            plt.imshow(img, cmap='viridis')
            plt.axis('off')
            print("min:", np.min(img), "max:", np.max(img))
        plt.tight_layout()
        plt.subplots_adjust(top=0.94)
        plt.suptitle(f'Layer {activation_model.layers[layer_index + 1].name} Feature Maps')
        plt.show()

2024-03-06

TensorFlow on Google Cloud | Google Cloud Skills Boost

Steps toward acquiring the skills needed for AI & Data specializations on the Google Cloud Skills Boost platform, thanks to an opportunity provided by Google Ukraine.

Course: TensorFlow on Google Cloud

TensorFlow on Google Cloud, Mar 5, 2024

Summary

This course covers designing and building a TensorFlow input data pipeline, building ML models with TensorFlow and Keras, improving the accuracy of ML models, writing ML models for scaled use, and writing specialized ML models.
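
A minimal sketch of the input-pipeline part, written by me as an illustration (the data here is random, not from the course labs):

import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 8).astype("float32")
labels = np.random.randint(0, 2, size=(1000,))

# Typical tf.data pipeline: shuffle -> batch -> prefetch, so the CPU prepares
# the next batch while the accelerator trains on the current one.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=1000)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=2)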



#MachineLearning #MachineLearningModels #MachineLearningPipeline

BADGES



2024-03-01

Convolutional neural networks (Conv): now that looks familiar :)

While going through the lecture on convolutional neural networks (Conv) in the "Python Data Science" track of the GoIT school, I kept wondering why it all looked so familiar.

convolutional neural network

I dug up some code from back in 2014. At the time, Chrome NaCl could run client-side programs written in C in the browser (a .pexe file compiled on the client side), and we wrote code that enhanced the video image on the fly with OpenGL shaders.

(It captured a video frame and drew the OpenGL output on top of the video, so the original video was not visible.)

And right there were operations exactly like a 3×3 Conv kernel, with the average then subtracted. 😀

const char kFragShaderSource[] =  "precision mediump float;\n"
 "uniform sampler2D u_texture;\n"
 "uniform float imgWidth;\n"
 "uniform float imgHeight;\n"
 "varying vec2 v_texcoord;\n"
 "float kernel[9];\n"
 "vec2 offset[9];\n"
 "float step_w = 1.0/imgWidth;\n"
 "float step_h = 1.0/imgHeight;\n"
 "void main() {\n"
 "offset[0] = vec2(-step_w, -step_h);\n"
 "offset[1] = vec2(0.0, -step_h);\n"
 "offset[2] = vec2(step_w, -step_h);\n"
 "offset[3] = vec2(-step_w, 0.0);\n"
 "offset[4] = vec2(0.0, 0.0);\n"
 "offset[5] = vec2(step_w, 0.0);\n"
 "offset[6] = vec2(-step_w, step_h);\n"
 "offset[7] = vec2(0.0, step_h);\n"
 "offset[8] = vec2(step_w, step_h);\n"
 "kernel[0] = 0.;\n"
 "kernel[1] = -.4;\n"
 "kernel[2] = 0.;\n"
 "kernel[3] = -.4;\n"
 "kernel[4] = 2.6;\n"
 "kernel[5] = -.4;\n"
 "kernel[6] = 0.;\n"
 "kernel[7] = -.4;\n"
 "kernel[8] = 0.;\n"
 "vec4 sum = vec4(0.0);\n"
 "int i;\n"
 "for (i = 0; i < 9; i++) {\n"
 "vec4 color = texture2D(u_texture, (vec2(1.0,1.0)-v_texcoord) + offset[i]);\n"
 "sum += color * kernel[i];\n"
 "}\n"
 "gl_FragColor = sum;\n"
 "}\n";

kernel:

 0.0  -0.4   0.0
-0.4   2.6  -0.4
 0.0  -0.4   0.0
Unfortunately there was no padding in the code, so there were 1-pixel problems around the perimeter :)
"offset[0] = vec2(-step_w, -step_h);\n"