Starting from 2023, this blog is moving to GitHub, where I will continue managing my personal web site and blog.
Thank you, Blogger.com. It has been a long journey of 16 years. Let's continue at https://myseq.github.io/
What is the difference between Cybersecurity and Database engineering?
Cybersecurity specialists think in 24-hour cycles; database engineers think in end-of-quarter cycles.
Here is a Venn diagram that shows the difference between Information Security, Security Resilience, and Cybersecurity.
         _______________
        /               \
       |  Cybersecurity  |
        \_______________/
               |
     _______________________
    |                       |
    |  Security Resilience  |
    |_______________________|
               |
     _______________________
    |                       |
    |  Information Security |
    |_______________________|
The diagram shows that Cybersecurity is a subset of Security Resilience,
as Security Resilience encompasses not just Cybersecurity but also
physical security, incident response planning, and business continuity
planning. Similarly, Information Security is a subset of Security
Resilience, as Security Resilience encompasses a broader set of
security-related activities beyond just information security.
Yes, there is a difference between information security and cybersecurity, although the two terms are often used interchangeably.
Information security is a broader term that encompasses the protection of all forms of information, both digital and non-digital, from unauthorized access, use, disclosure, disruption, modification, or destruction. This includes physical security measures, such as locks and access controls, as well as technical and administrative controls, such as encryption and policies and procedures.
Cybersecurity, on the other hand, specifically refers to the protection of digital information and systems from cyber threats, such as cyber attacks, hacking, malware, and other forms of unauthorized access, use, or disclosure. It involves the use of technical measures, such as firewalls, intrusion detection systems, and encryption, to secure digital assets.
In summary, information security is a broader concept that includes both physical and digital security, while cybersecurity is a specific subset of information security that focuses solely on digital security.
Jekyll is one of the most popular generators for static websites and is based on Ruby. To realize the actual website, the generator uses CSS, HTML and Markdown. Jekyll also offers easy migration from WordPress or other systems to the new environment.
Here are the steps I followed to set up Jekyll on Ubuntu.
Let's start with installing Ruby and prerequisites.
$ sudo apt install ruby-full build-essential zlib1g-dev
Then, set up the gem installation directory in the BASH startup file.
$ echo '# Install Ruby Gems to ~/gems' >> ~/.bashrc
$ echo 'export GEM_HOME="$HOME/gems"' >> ~/.bashrc
$ echo 'export PATH="$HOME/gems/bin:$PATH"' >> ~/.bashrc
$ source ~/.bashrc
Next, install Jekyll and Bundler:
$ gem install jekyll bundler
Now, just clone from the GitHub with the theme Chirpy.
$ git clone https://github.com/cotes2020/jekyll-theme-chirpy
Last, install the dependencies and run local server.
$ cd jekyll-theme-chirpy
$ bundler
$ bundle exec jekyll serve --host 0.0.0.0
http://0.0.0.0:4000/
Google has released OSV-Scanner, a free tool that gives open source developers access to vulnerability information relevant to their projects.
With the newly launched OSV.dev service, the different open source ecosystems and vulnerability databases can publish and consume information in one simple, precise, machine-readable format (JSON).
OSV-Scanner is an effort to provide a supported frontend to the OSV database (OSV.dev) that connects a project's list of dependencies with the vulnerabilities that affect them.
There are a few ways to use OSV:
So, let's get started by running OSV-Scanner on your project. It finds all the dependencies in use by analyzing manifests, SBOMs, and commit hashes, then connects this information with the centralized OSV database and displays the vulnerabilities relevant to your project.
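As a rough illustration of what the scanner does under the hood, here is a minimal Python sketch that builds a query for the OSV.dev API (`POST https://api.osv.dev/v1/query`) and extracts vulnerability IDs from a response. The endpoint and field names follow the public OSV schema, but treat the exact shapes here as assumptions; the sample response is made up.

```python
import json

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV.dev query endpoint

def build_osv_query(name: str, ecosystem: str, version: str) -> dict:
    """Build the JSON body for a single-package OSV.dev query."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}

def extract_vuln_ids(response: dict) -> list:
    """Pull the vulnerability IDs out of a decoded OSV.dev query response."""
    return [v["id"] for v in response.get("vulns", [])]

# Example: query body for jinja2 2.4.1 on PyPI.
query = build_osv_query("jinja2", "PyPI", "2.4.1")
print(json.dumps(query))

# A made-up response in the OSV JSON shape, for illustration only.
sample_response = {"vulns": [{"id": "GHSA-xxxx-xxxx-xxxx"}, {"id": "PYSEC-0000-000"}]}
print(extract_vuln_ids(sample_response))
```

In practice you would POST `query` to `OSV_QUERY_URL` (e.g. with `urllib.request` or `requests`) and feed the decoded JSON into `extract_vuln_ids`.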
Links:
OpenSSF Scorecard is one of the initiatives from the Open Source Security Foundation (OpenSSF). It is a tool that quickly assesses open source projects for risky practices via automated checks.
To run the checks, there are 2 ways:
Scorecard checks for vulnerabilities affecting different parts of the software supply chain, including source code, build, dependencies, testing, and project maintenance.
Links:
My Ubuntu 22.04 (WSL) comes with Python 3.10.6, and I need to upgrade it to 3.11 for a workshop. (More importantly, 3.11 claims to be 10-60% faster than 3.10. 😎)
Here are the steps:
$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt update
$ sudo apt install python3.11-full
$ python3.11 --version
Python 3.11.1
Next, set Python 3.11 as the default. (In auto mode, update-alternatives picks the alternative with the highest priority, so give 3.11 the larger number.)
$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 100
$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 110
$ sudo update-alternatives --config python3
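To confirm which interpreter `python3` now resolves to, and to try the speed claim yourself, here is a quick sanity check; the naive `fib` micro-benchmark is just an illustrative CPU-bound workload, not an official benchmark.

```python
import sys
import timeit

def fib(n: int) -> int:
    """Deliberately naive recursion: a small CPU-bound toy workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Which interpreter am I actually running?
print(sys.version)

# Run the same workload under python3.10 and python3.11 to compare timings.
elapsed = timeit.timeit(lambda: fib(20), number=50)
print(f"fib(20) x50: {elapsed:.3f}s")
```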
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import httpx
import requests
In general, the two modules are similar. Here is a simple comparison of the differences between the Python HTTPX and Requests modules.
 | Requests | HTTPX |
---|---|---|
Sessions | requests.Session() | httpx.Client() |
Async support | Not supported | httpx.AsyncClient() |
HTTP/2 support | Not supported | httpx.Client(http2=True), httpx.AsyncClient(http2=True) |
I have started moving over to HTTPX since Dec 2022.
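To make the table concrete, here is a small sketch of the equivalent calls. The imports are deferred inside the functions so the file loads even without the packages installed (`pip install requests httpx` to actually run them), and the URLs are placeholders.

```python
def get_with_requests(url: str) -> int:
    """Requests: synchronous only, session-based."""
    import requests  # pip install requests
    with requests.Session() as session:
        return session.get(url).status_code

def get_with_httpx(url: str) -> int:
    """HTTPX mirrors the requests API almost one-to-one for sync code."""
    import httpx  # pip install httpx
    with httpx.Client() as client:  # httpx.Client(http2=True) enables HTTP/2
        return client.get(url).status_code

async def get_many_with_httpx(urls):
    """The big difference: httpx.AsyncClient allows concurrent requests."""
    import asyncio
    import httpx
    async with httpx.AsyncClient() as client:
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        return [r.status_code for r in responses]

# e.g. asyncio.run(get_many_with_httpx(["https://example.com"] * 3))
```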
Links:
Everyone knows RBAC is important. This is one of the best webinars demonstrating best practices in designing RBAC.
Notes:
Top 10 vendors and vulnerable products
CISA started sharing the KEV catalog publicly on Nov 3, 2021. A total of 860 CVEs have been added to the KEV catalog after 13 months (849 CVEs as of Nov 3).
Too many organizations rely on the Common Vulnerability Scoring System (CVSS), developed at FIRST.org, to decide when it is time to patch. Vulnerabilities with a Low/Medium CVSS score are often ignored completely or deferred to another time, while a vulnerability scoring 7.0 and above generates a hair-on-fire “patch now” event.
And this is the reason why patches just don’t get applied in a timely fashion.
It is time we reexamine our vulnerability management programs to ensure we are not letting impactful, known-exploited CVEs linger in our networks long past the time that vendor fixes are available. We need to evolve our practices to incorporate sources such as the KEV catalog into our operational vulnerability analysis and decision making.
The screenshot above shows the top 10 vulnerable products and vendors within the KEV catalog. I shared the script on GitHub back in April 2022.
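The tallying itself is simple. Here is a hedged sketch of how a top-10 count can be computed with `collections.Counter`; the field names `vendorProject` and `product` follow CISA's published KEV JSON schema, and the three sample records below are illustrative, not real feed data.

```python
from collections import Counter

# Illustrative records in the shape of the KEV feed's "vulnerabilities" list.
kev_vulnerabilities = [
    {"cveID": "CVE-2021-44228", "vendorProject": "Apache", "product": "Log4j"},
    {"cveID": "CVE-2022-22965", "vendorProject": "VMware", "product": "Spring Framework"},
    {"cveID": "CVE-2021-45046", "vendorProject": "Apache", "product": "Log4j"},
]

def top_products(vulns: list, n: int = 10) -> list:
    """Count (vendor, product) pairs and return the n most frequent."""
    counts = Counter((v["vendorProject"], v["product"]) for v in vulns)
    return counts.most_common(n)

for (vendor, product), count in top_products(kev_vulnerabilities):
    print(f"{count:4d}  {vendor} {product}")
```

Against the real catalog you would first download CISA's KEV JSON feed and pass its "vulnerabilities" list in.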
Links:
Let's have some hacking/cmdline fun with ChatGPT.
Do you know what OS ChatGPT runs on, and how much memory and hard disk it uses? It is running on
😮 😮 😮 😮 😮 😮
First login to ChatGPT at https://chat.openai.com/chat with Google account.
Second, enable the terminal by pasting this prompt into ChatGPT:
I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. I want you to only reply with the terminal output inside one unique code block, and nothing else. Do not write explanations. Do not type commands unless I instruct you to do so. When I need to tell you something in English I will do so by putting text inside curly brackets {like this}. My first command is pwd.
Next, we can continue with all the cmdlines we are familiar with:
lsb_release -a
fdisk -l /dev/sda
cat /etc/passwd
cat /etc/shadow
uptime
Personally, I don't think the information above is true, but it is fun to see this sometimes. 😇
Links:
ChatGPT is a language model developed by OpenAI. GPT-3 stands for "Generative Pretrained Transformer 3" and is a type of artificial intelligence (AI) that is designed to generate human-like text.
ChatGPT is specifically designed to be used in chatbot applications, where it can generate natural-sounding responses to user inputs.
screenshot taken
ChatGPT can remember what we said and allows for follow-up questions, such as:
chatGPT
秋天的诗 (a poem about autumn)
Let's start to get some fun.
Links:
OpenAI created a tool to generate AI images and made it available to everyone on the Internet. The tool is called DALL-E 2.
Login to DALL-E 2 at https://openai.com/dall-e-2/ with Google account. And just type in any description to generate image, such as:
an old man and a dog walking at beach
We can also append keywords to be more specific, such as:
an old man and a dog walking at beach, line art
Keywords can be:
an old man and a dog walking along beach, oil painting
Links:
Go to edge://flags in the URL bar, and enable the following:
1. Enhance text contrast
edge://flags/#edge-enhance-text-contrast
2. Show block option in autoplay settings
edge://flags/#edge-autoplay-user-setting-block-option
3. Show Windows 11 visual effects in the title bar and toolbar
edge://flags/#edge-visual-rejuv-mica
4. Assigns the Backspace key to go back a page
edge://flags/#edge-backspace-key-navigate-page-back
5. Rounded tabs
edge://flags/#edge-visual-rejuv-rounded-tabs
Links:
To start a notepad.exe process as normal user:
c:\> notepad.exe
To start a notepad.exe process as normal user with PowerShell:
PS> Start-Process notepad
To open a file as Administrator with PowerShell:
PS> Start-Process 'notepad' -Verb runAs -ArgumentList c:\windows\system32\drivers\etc\hosts
To simulate 'sudo' with a PowerShell function:
-----------8<------------------
function sudo
{
    if ($args.Count -gt 0)
    {
        $programName = $args[0]
        if ($args.Count -gt 1)
        {
            # Pass the remaining arguments through to the elevated process.
            $programArgs = $args[1 .. ($args.Count - 1)]
            Start-Process $programName -Verb runAs -ArgumentList $programArgs
        }
        else
        {
            # No extra arguments: do not pass a null -ArgumentList.
            Start-Process $programName -Verb runAs
        }
    }
    else
    {
        if ($env:WT_SESSION)
        {
            Start-Process "wt.exe" -Verb runAs
        }
        elseif ($PSVersionTable.PSEdition -eq 'Core')
        {
            Start-Process "$PSHOME\pwsh.exe" -Verb runAs
        }
        elseif ($PSVersionTable.PSEdition -eq 'Desktop')
        {
            Start-Process "$PSHOME\powershell.exe" -Verb runAs
        }
    }
}
Set-Alias -Name su -Value sudo
-----------8<------------------
To use the cmdlet:
PS> sudo notepad c:\windows\system32\drivers\etc\hosts
Links:
Let's learn the zero trust segmentation for network, process, and file access within K8s cluster with Tracy Walker.
Threat-Based Controls | Zero-Trust Controls |
---|---|
CVEs | Automated Learning |
DLP | Network |
Network Attacks | Process |
OWASP Top 10 WAF | File Access |
Admission Control | Security as Code |
The Automated Behavioral-based Zero-Trust covers:
The demo will show how Zero Trust can protect against zero-day attacks as well as exploits such as Log4j and Spring4shell.
Links:
Learn how Docker/container networking works.
Different Docker Network Types:
Interface | Description |
---|---|
eth0 | VM host network interface |
docker0 | Virtual bridge interface (switch) |
Show the default Docker networks.
ubuntu@docker:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
e2397b67991e bridge bridge local
f6648d670e10 host host local
031ec528726f none null local
ubuntu@docker:~$
Start the first container (dnet_bridge) with the default bridge driver.
ubuntu@docker:~$ docker run -itd --rm --name dnet_bridge busybox
e05bdb96427b458d649c0ca8eb6d800a50dde48c6619df34121f3f6c29b36f6f
ubuntu@docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e05bdb96427b busybox "sh" 5 seconds ago Up 4 seconds dnet_bridge
ubuntu@docker:~$
By default, the bridge network applies NAT masquerading for outbound access to external networks, but never exposes the container externally. We need to publish a port if the external network should reach our container.
ubuntu@docker:~$ docker run -itd --rm -p80:80 --name web01 nginx
e83d9abbea4a909f579a0461c9fb04a8247dd42100b7be08cd701cf9740d856c
ubuntu@docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e83d9abbea4a nginx "/docker-entrypoint.…" 4 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp web01
13f8d2d6f05f busybox "sh" 4 minutes ago Up 4 minutes dns01
e05bdb96427b busybox "sh" 9 minutes ago Up 9 minutes dnet_bridge
ubuntu@docker:~$
Second, let's define our own bridge network. This is mainly for segregating (isolating) containers.
ubuntu@docker:~$ docker network create dmz
71a335a2c869afde71ff4d6debf5155b319e65894c7c83dcea1b1d6e208eb882
ubuntu@docker:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
e2397b67991e bridge bridge local
71a335a2c869 dmz bridge local
f6648d670e10 host host local
031ec528726f none null local
ubuntu@docker:~$ docker run -itd --rm --network dmz -p80:80 --name web01 nginx
9ddc5bd9c13c884237aa7164a4c4f3c17498a68da64c735879eaf479c397a433
ubuntu@docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9ddc5bd9c13c nginx "/docker-entrypoint.…" 9 seconds ago Up 8 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp web01
e05bdb96427b busybox "sh" 16 minutes ago Up 16 minutes dnet_bridge
ubuntu@docker:~$
Third, the host network. This makes the container run on the same network as the VM host.
ubuntu@docker:~$ docker run -itd --rm --network host --name web02 nginx
3022063adc651f94e23edd8755c7c9521f40a7b2df157bfc92c66f21016d3842
ubuntu@docker:~$
Fourth, MAC-VLAN (bridge mode).
ubuntu@docker:~$ docker network create -d macvlan --subnet 172.31.112.0/20 --gateway 172.31.112.1 -o parent=eth0 vlan1
373a821c44aefb4030109482f9480008bf87a152ad74a6c714cbeaa57f73e6dc
ubuntu@docker:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
e2397b67991e bridge bridge local
71a335a2c869 dmz bridge local
f6648d670e10 host host local
031ec528726f none null local
373a821c44ae vlan1 macvlan local
ubuntu@docker:~$
ubuntu@docker:~$ sudo ip link set eth0 promisc on
ubuntu@docker:~$
Fifth, MAC-VLAN (802.1q mode).
ubuntu@docker:~$ docker network create -d macvlan --subnet 192.168.20.0/24 --gateway 192.168.20.1 -o parent=eth0.20 vlan20
3634f36fe849afa8d7dfc65589b71aa0c0902bd6bc1ed294e0d258ffc14e640f
ubuntu@docker:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
e2397b67991e bridge bridge local
71a335a2c869 dmz bridge local
f6648d670e10 host host local
031ec528726f none null local
373a821c44ae vlan1 macvlan local
3634f36fe849 vlan20 macvlan local
ubuntu@docker:~$
ubuntu@docker:~$ docker run -itd --rm --network vlan3 --ip 192.168.94.7 --name dns01 busybox
de504908dc372c0f017a36c4357c70a1f28acd0a7f763bb372642c96e89baef9
ubuntu@docker:~$ docker run -itd --rm --network vlan3 --ip 192.168.94.8 --name dns02 busybox
2dc61bd9a45f828493fe1b55f8786692740baf5079deeddb5cefebe2468aa583
ubuntu@docker:~$ docker run -itd --rm --network vlan3 --ip 192.168.95.9 --name web01 busybox
a1d23a1691d0c2fd33b03d023bc03bb0a282e39a8f254bdf54fbab4d3e46a9de
ubuntu@docker:~$ docker run -itd --rm --network vlan3 --ip 192.168.95.10 --name web02 busybox
9cc2db6492de35f5a2fa230702e5e41ff4bf75bd563eac71bf39d0e7171b0e0f
ubuntu@docker:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9cc2db6492de busybox "sh" 4 seconds ago Up 3 seconds web02
a1d23a1691d0 busybox "sh" 13 seconds ago Up 13 seconds web01
2dc61bd9a45f busybox "sh" 35 seconds ago Up 35 seconds dns02
de504908dc37 busybox "sh" About a minute ago Up About a minute dns01
ubuntu@docker:~$
Sixth, IP-VLAN (L2). Containers share the VM host's MAC address, so the network must allow one MAC address to be associated with multiple IP addresses.
ubuntu@docker:~$ docker network create -d ipvlan --subnet 172.31.112.0/20 --gateway 172.31.112.1 -o parent=eth0 vlan2
40aadb9f60c3dc889c8b9a30e627d5a314226c204ca48f09375447def53b4ad4
ubuntu@docker:~$
Seventh, IP-VLAN (L3). Everything connects to the host, which functions like a router, giving us more control over the traffic.
ubuntu@docker:~$ docker network create -d ipvlan --subnet 192.168.94.0/24 -o parent=eth0 -o ipvlan_mode=l3 --subnet 192.168.95.0/24 vlan3
000b2c4799a4fd62a4435d99eed592ae8fa7ad5b8b797aeb7e06322b477f7ecf
ubuntu@docker:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
e2397b67991e bridge bridge local
71a335a2c869 dmz bridge local
f6648d670e10 host host local
031ec528726f none null local
000b2c4799a4 vlan3 ipvlan local
ubuntu@docker:~$
* Need to add static route at the router in order for the network to reach back to vlan3.
Eighth, the overlay network. It is used to link up multiple hosts, create an overlay network, and create rules that allow containers on different hosts to talk to each other.
Usually it is used with Docker Swarm.
Last (9th) is the none network.
ubuntu@docker:~$ docker run -itd --rm --network none --name xnet busybox
0c21ccbb87d1937dd7ce18da696a5bd7ca1530969a4198992e5852e3d0593d14
ubuntu@docker:~$
Links:
Let's follow the steps to create a more complex Docker Compose stack:
First, we start a Docker VM with Multipass.
PS> multipass launch docker -n kiko
Log in to the Docker VM (kiko) and start creating docker-compose.yaml.
PS> multipass shell kiko
ubuntu@kiko:~$ mkdir blog && cd blog
ubuntu@kiko:~/blog$ vi docker-compose.yaml
---------------------------------------------------
version: "3"
services:
  frontend:
    image: wordpress
    ports:
      - "8089:80"
    depends_on:
      - backend
    environment:
      WORDPRESS_DB_HOST: backend
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: "coffee"
      WORDPRESS_DB_NAME: wordpress
    networks:
      dmz:
        ipv4_address: "192.168.33.89"
  backend:
    image: "mysql:5.7"
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: "coffee"
    volumes:
      - ./mysql:/var/lib/mysql
    networks:
      dmz:
        ipv4_address: "192.168.33.90"
networks:
  dmz:
    ipam:
      driver: default
      config:
        - subnet: "192.168.33.0/24"
----------------------------------------------------
ubuntu@kiko:~/blog$ docker-compose up -d
ubuntu@kiko:~/blog$ docker-compose ps
ubuntu@kiko:~/blog$ docker network ls
ubuntu@kiko:~/blog$ docker inspect blog_dmz
http://kiko.mshome.net:8089/
Links:
Let's follow the steps to create our first Docker Compose stack.
First, we start a Docker VM with Multipass.
PS> multipass launch docker -n kiko
Log in to the Docker VM (kiko) and start creating docker-compose.yaml.
PS> multipass shell kiko
ubuntu@kiko:~$ mkdir coffee && cd coffee
ubuntu@kiko:~/coffee$ vi docker-compose.yaml
---------------------------------------------------
version: "3"
services:
  website:
    image: nginx
    ports:
      - "8081:80"
    restart: always
----------------------------------------------------
ubuntu@kiko:~/coffee$ docker-compose up -d
ubuntu@kiko:~/coffee$ docker-compose ps
Add a second image with a different network (coffee).
ubuntu@kiko:~/coffee$ vi docker-compose.yaml
---------------------------------------------------
version: "3"
services:
  website:
    image: nginx
    ports:
      - "8081:80"
    restart: always
  website2:
    image: nginx
    ports:
      - "8082:80"
    restart: always
    networks:
      coffee:
        ipv4_address: 192.168.92.22
networks:
  coffee:
    ipam:
      driver: default
      config:
        - subnet: "192.168.92.0/24"
----------------------------------------------------
ubuntu@kiko:~/coffee$ docker-compose up -d
ubuntu@kiko:~/coffee$ docker network ls
ubuntu@kiko:~/coffee$ docker inspect coffee_default
ubuntu@kiko:~/coffee$ docker inspect coffee_coffee
Links:
Virtualization (a hypervisor) virtualizes hardware; a Docker container virtualizes the OS kernel.
First, we start a Docker VM with Multipass.
PS> multipass launch docker -n kiko
Log in to the Docker VM and start downloading images.
PS> multipass shell kiko
ubuntu@kiko:~$ docker pull centos
ubuntu@kiko:~$ docker container run -itd --name cc centos
ubuntu@kiko:~$ docker exec -it cc bash
[root@a4d5e22b6ef5 /]# cat /etc/os-release
Try downloading other images.
ubuntu@kiko:~$ docker pull archlinux
ubuntu@kiko:~$ docker pull ubuntu
ubuntu@kiko:~$ docker pull almalinux
ubuntu@kiko:~$ docker run -itd --name uu ubuntu
Check the utilization and stop the containers.
ubuntu@kiko:~$ docker stats
ubuntu@kiko:~$ docker stop uu cc
Why do containers run so fast, and why use containers?
Links:
This is a quick tutorial on setting up a Redmine on Docker container.
Redmine is a flexible project management web application written using Ruby on Rails framework.
This simulates how to dockerize a production-ready Redmine deployment using Nginx as a reverse proxy.
I'm using Multipass to set up my Docker platform.
PS> multipass launch docker -n dido
PS> multipass shell dido
First, create 3 files within an empty folder.
~$ mkdir red
~$ cd red
~/red$ cat Dockerfile
------------------8<-------------------------
FROM redmine:5

RUN apt update && \
    apt install -y \
      supervisor \
      nginx && \
    apt clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY conf/default.conf /etc/nginx/sites-available/default

EXPOSE 80

ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
------------------8<-------------------------
~/red$ cat conf/default.conf
------------------8<-------------------------
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
------------------8<-------------------------
~/red$ cat conf/supervisord.conf
------------------8<-------------------------
[supervisord]
nodaemon=true
user=root

[program:nginx]
user=root
command=nginx

[program:redmine]
user=redmine
directory=/usr/src/redmine
command=/docker-entrypoint.sh rails server -b 127.0.0.1
------------------8<-------------------------
Next, build the docker image called "redapp".
~/red$ docker build -t redapp .
Sending build context to Docker daemon 4.608kB
Step 1/6 : FROM redmine:5
---> 7cc28c5d1864
Step 2/6 : RUN apt update && apt install -y supervisor nginx && apt clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
---> Using cache
---> 03ee1eb12c0a
Step 3/6 : COPY conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
---> Using cache
---> bfaee539e7d4
Step 4/6 : COPY conf/default.conf /etc/nginx/sites-available/default
---> Using cache
---> 8f20ffe3be6a
Step 5/6 : EXPOSE 80
---> Using cache
---> de69fec60e49
Step 6/6 : ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
---> Using cache
---> 3e9b0eecdfaf
Successfully built 3e9b0eecdfaf
Successfully tagged redapp:latest
Next, run the container, publishing external port 80 (on the eth0 interface) to Nginx (internal port 80 on the docker0 interface).
~/red$ docker run -p 80:80 -d redapp
4851a3266f50ebd3ee7c3c69e87bc2e4697e74e699839b21f566119c39e5665f
Last, point the browser to http://172.18.238.107/login (where 172.18.238.107 is the IP address of my eth0 interface).
http://172.18.238.107/login
Links:
Check or curl your weather at the cmdline with wttr.in:
$ curl -s wttr.in/Melbourne?format="%l:%c+%C+%t/%f+%h+%w+%m+UV:%u/12+%P"
Melbourne:⛅️ Partly cloudy +15°C/+14°C 59% ↑31km/h 🌗 UV:3/12 1016hPa
$ curl -s wttr.in/New+York?format="%l:%c+%C+%t/%f+%h+%w+%m+UV:%u/12+%P"
New+York:☀️ Clear +1°C/-3°C 56% ↓15km/h 🌗 UV:1/12 1022hPa
PS> Invoke-RestMethod 'https://wttr.in/New+York?format="%l:%c+%C+%t/%f+%h+%w+%m+UV:%u/12+%P"'
New+York:☀️ Clear +1°C/-3°C 56% ↓15km/h 🌗 UV:1/12 1022hPa
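The same query is easy to script. A small Python sketch that builds the wttr.in one-line URL with the format string used above; the city name is URL-quoted into the `New+York` form, while the format string is passed through raw, just as curl does. Actually fetching the URL needs network access.

```python
from urllib.parse import quote_plus

# One-line wttr.in format: location, condition, temp/feels-like, humidity,
# wind, moon phase, UV index, pressure.
WTTR_FORMAT = "%l:%c+%C+%t/%f+%h+%w+%m+UV:%u/12+%P"

def wttr_url(city: str, fmt: str = WTTR_FORMAT) -> str:
    """Build a one-line wttr.in query URL for the given city."""
    return f"https://wttr.in/{quote_plus(city)}?format={fmt}"

print(wttr_url("New York"))
# Fetch with e.g. urllib.request.urlopen(wttr_url("New York")).read()
```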
Links:
This set of documentation describes the Windows Commands you can use to automate tasks by using scripts or scripting tools.
All supported versions of Windows and Windows Server have a set of Win32 console commands built in.
Links:
Microsoft Teams is powered by Electron, SlimCore, Chromium, Node.js, and the V8 JavaScript engine. (No wonder it sucks up all your memory.)
To check your MS Teams version, you have to enter the Dev Mode with the following steps:
Once again, I need to tune my new Firefox browser settings.
Change settings with about:config:
Description | Settings | Values | Default |
---|---|---|---|
To disable disk cache | browser.cache.disk.enable | false | true |
To disable disk cache on SSL | browser.cache.disk_cache_ssl | false | true |
To enable RAM cache | browser.cache.memory.enable | true | true |
To set RAM cache capacity based on 2GB physical memory | browser.cache.memory.capacity | 24576 | -1 |
To view current memory cache usage, put about:cache?device=memory in the Location Bar.
Links:
I was installing my printer driver on my new Windows 11 machine.
And I need a debugger to troubleshoot the printer driver. It is time to get a Windows debugger for the new OS.
The Windows Debugger (WinDbg) can be used to debug kernel-mode and user-mode code, analyze crash dumps, and examine the CPU registers while the code executes.
Before getting started with Windows debugging, we need to complete 2 things.
It seems the easiest way to get Windows symbols is to use the Microsoft public symbol server. The symbol server makes symbols available to your debugging tools as needed, making it easier to debug your code.
After a symbol file is downloaded from the symbol server, it is cached on the local computer for quick access. Note that Microsoft is no longer publishing offline symbol packages for Windows.
While looking for WinDbg, I also found WinDbg Preview (in the MS Store).
WinDbg Preview is the latest version of WinDbg, with more modern visuals, faster windows, and a full-fledged scripting experience, built with the extensible debugger data model front and center. In short, it is simply more user friendly.
And the best part is that WinDbg Preview is available in the MS Store. Simply run the cmdline below to install it.
C:\> winget install WinDbg --source msstore
Links:
PS> Set-NetIPInterface -ifAlias "vEthernet (WSL)" -Forwarding Enabled
PS> Set-NetIPInterface -ifAlias "vEthernet (Default Switch)" -Forwarding Enabled
PS> Get-NetIPInterface | select ifIndex,InterfaceAlias,AddressFamily,ConnectionState,Forwarding | Sort-Object -Property IfIndex | Format-Table
~$ cd .ssh
~/.ssh$ ssh-keygen -t ed25519 -C "xx@wsl2"
~/.ssh$ ssh-keygen -l -f id_ed25519.pub
~/.ssh$ ssh-copy-id -i id_ed25519.pub xx@remote_server
-------------------------------
users:
  - default
  - name: xx
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-rsa <rsa keys in one line>
package_update: true
package_upgrade: true
packages:
  - nodejs
  - python3
-------------------------------
PS> multipass launch -c 2 -m 2G -d 20G -n ubuntu-vm --cloud-init cloud_init.yaml
PS> winget install python --source msstore
PS> where python
Found an interesting tool called wtfis.
wtfis is a commandline tool that gathers information about a domain, FQDN or IP address using various OSINT services.
This tool assumes that you are using free tier / community level accounts, and so makes as few API calls as possible to minimize hitting quotas and rate limits.
Setup
wtfis uses these environment variables:
Installation
$ pip install wtfis
Usage:
$ wtfis -h
usage: wtfis [-h] [-m N] [-s] [-n] [-1] [-V] entity
positional arguments:
entity Hostname, domain or IP
options:
-h, --help show this help message and exit
-m N, --max-resolutions N
Maximum number of resolutions to show (default: 3)
-s, --use-shodan Use Shodan to enrich IPs
-n, --no-color Show output without colors
-1, --one-column Display results in one column
-V, --version Print version number
Links:
Found an interesting repo that shares DevOps exercises and questions. It can be used to prepare for an interview.
It is suitable for anyone interested in pursuing a career as a DevOps engineer or learning the concepts.
Links:
GitOps is an approach to perform cloud operations (in DevOps way) by centralizing the desired state of system into code and enforcing change through automation via version control system (such as Git).
Git acts as a common place where workflows, automation, checks and balances can be applied before entering a production environment, enabling organizations with a crucial foothold to secure by design further than ever before.
Adopting GitOps means committing to interacting only with Git and leaving the integration and deployment jobs to automation.
By ensuring that everything is code driven and declared, the risk from non-automated agents (a.k.a. humans) can be drastically minimized.
For example, using the automation workflows, you can embed compliance scans to enforce best-practices and regulatory mandates to prevent mis-configurations. With detection of configuration drift, it becomes easier and quicker to isolate vulnerable/compromised resources for investigation.
GitOps can leverage DevSecOps tools, such as IaC scanning, security testing, IAM and secret management. And by bringing a security-as-code and adding compliance requirements and security policies into coded artifacts, organizations can embrace GitSecOps to effectively shift the security left.
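As a toy illustration of security-as-code, here is a hypothetical check that scans declared resources for mis-configurations before they reach production; the resource shape and both policy rules are invented for this example.

```python
# Hypothetical IaC resources as they might appear after parsing a manifest.
resources = [
    {"name": "app-bucket", "type": "bucket", "public": False},
    {"name": "logs-bucket", "type": "bucket", "public": True},
    {"name": "web-vm", "type": "vm", "open_ports": [22, 80]},
]

def find_violations(resources: list) -> list:
    """Flag resources that break our (invented) policy rules."""
    violations = []
    for r in resources:
        if r.get("type") == "bucket" and r.get("public"):
            violations.append(f"{r['name']}: bucket must not be public")
        if r.get("type") == "vm" and 22 in r.get("open_ports", []):
            violations.append(f"{r['name']}: SSH (22) should not be exposed")
    return violations

# A CI job would fail the merge when any violation is found.
for v in find_violations(resources):
    print("POLICY VIOLATION:", v)
```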
Seven steps to a successful GitSecOps approach:
Just like DevSecOps, GitSecOps also requires the adoption of a new mindset and culture to getting things done in a cloud native way.
Sharing common tools, processes and goals — focused on a successful shared outcome rather than an isolated deliverable — ensures that the DevSecOps and GitSecOps goals are aligned to support each other and the organization’s digital transformation vision.
Links:
In this webinar, Stephen will demonstrate the process of downloading Microsoft cumulative updates to extract the patches and prep them for diffing.
It's a very useful way to identify patched vulnerabilities that can potentially be weaponized for exploitation of un-patched systems, as well as learning how vulnerabilities are patched to aid in bug hunting.
My notes:
Links:
After the article on Operation Hates Agile, here comes the next one: how to move from Operations to GitOps.
IaC is the replacement for traditional operations. It allows enterprises to control changes and manage configuration settings in cloud environments more efficiently.
First, we need to know what is contained inside "Infrastructure as Code" (IaC). There are 3 characteristics of IaC:
Most IaC is declarative in nature. However, we can always make changes to the cloud environment with both imperative or declarative automation.
To make imperative automation changes to cloud infra, we use a command-line interface (CLI). It directs changes to the cloud first within a container, then a virtual machine (VM), and then a virtual private cloud, through a script. This is a detailed checklist, but if the configuration needs to change after the push to multiple machines, the steps and the script would have to be repeated.
A declarative automation approach requires goal creation. For example, rather than using the CLI and listing the exact step-by-step configuration for a VM, you’d simply state that you want a VM with, say, a domain attached, and then let the automation take over. The declarative approach (most of the time in YAML) enables you to more easily state what needs to be accomplished by the automation tools.
Mutable means that it is prone to change. A virtual machine is an example of mutable infrastructure.
Immutable infrastructure cannot be changed once deployed, such as container/docker. Changes will still occur, but they are made to the original declarative statements. Once the changes are ready, all like devices or configurations are changed consistently.
Most of the time, we use both imperative and declarative automation methods interchangeably to manage IaC. This may raise an issue called Configuration Drift.
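Configuration drift detection boils down to diffing desired state against observed state. A minimal sketch, where the two dicts stand in for a parsed declarative manifest and a live query of the environment:

```python
def detect_drift(desired: dict, actual: dict) -> dict:
    """Return settings where the live environment differs from the declared state."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"desired": want, "actual": have}
    return drift

desired_state = {"instance_type": "t3.small", "port": 443, "tls": True}
actual_state = {"instance_type": "t3.small", "port": 80, "tls": True}

print(detect_drift(desired_state, actual_state))
# → {'port': {'desired': 443, 'actual': 80}}
```

A real reconciler (as in GitOps tooling) would then re-apply the declared state rather than just report the difference.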
MHDDoS is a DDoS Attack Script written in Python3. It includes 56 attack methods (DoS/DDoS).
Installation (1st way)
$ git clone https://github.com/MHProDev/MHDDoS.git
$ cd MHDDoS
$ pip install -r requirements.txt
Installation (2nd way)
$ docker pull ghcr.io/mhprodev/mhddos:latest
Links:
It is so convenient to use the command 'multipass shell jimny' whenever we need to access a VM we created.
But how can we log in without a password? Where is the SSH private key?
Actually, it uses SSH public key authentication to log in to the VM.
Configuring logging on Windows systems, and aggregating those logs into a SIEM, is a critical step toward ensuring that your environment can support effective incident response.
Events can be logged in the Security, System and Application event logs.
Field | Description |
---|---|
Log Name | Event Log where the event is stored. Useful when processing numerous logs pulled from the same system. |
Source | The service, Microsoft component or application that generated the event. |
Event ID | A code assigned to each type of audited activity. |
Level | The severity assigned to the event in question. |
User | The user account involved in triggering the activity or the user context that the source was running as when it logged the event. |
OpCode | Assigned by the source generating the log. |
Logged | The local system date and time when the event was logged. |
Task Category | Assigned by the source generating the log. |
Keywords | Assigned by the source and used to group or sort events. |
Computer | The computer on which the event was logged. This is useful when examining logs collected from multiple systems, but should not be considered to be the device that caused an event (remote workstation). |
Description | A text block where additional information specific to the event being logged is recorded. |
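To show how these fields are used once logs are exported, here is a hypothetical sketch that tallies and filters events by Event ID; the sample records are invented, though 4624 and 4625 are the well-known Windows successful-logon and failed-logon Event IDs.

```python
from collections import Counter

# Invented sample records using the field names from the table above.
events = [
    {"EventID": 4624, "Level": "Information", "Computer": "WS01", "User": "alice"},
    {"EventID": 4625, "Level": "Information", "Computer": "WS01", "User": "bob"},
    {"EventID": 4625, "Level": "Information", "Computer": "WS02", "User": "bob"},
]

def count_by_event_id(events: list) -> Counter:
    """Tally how often each Event ID appears across collected logs."""
    return Counter(e["EventID"] for e in events)

def failed_logons(events: list) -> list:
    """4625 is the Windows 'failed logon' Event ID."""
    return [e for e in events if e["EventID"] == 4625]

print(count_by_event_id(events))
print([e["Computer"] for e in failed_logons(events)])
```

A SIEM applies the same grouping and filtering at scale, across logs pulled from many systems.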
Types of Windows Event Log Analysis – Guide
Go through the complete incident response guide at the following link.
Links:
With Multipass, do I still need VMware Player to run Linux with full privileges under Windows?
WSL is the more common choice for running a virtual machine nowadays compared to VMware Player.
With Multipass, everything seems even easier and faster now. 😇
Here's my story today on how I needed to run an nmap port scan against a router.
PS> multipass launch -n scanner
PS> multipass shell scanner
ubuntu@scanner:~$ sudo snap install nmap
ubuntu@scanner:~$ sudo nmap -sU -p 53 192.168.31.1
Starting Nmap 7.93 ( https://nmap.org ) at 2022-10-21 23:17 +08
Couldn't open a raw socket. Error: Permission denied (13)
ubuntu@scanner:~$ sudo snap connect nmap:network-control
ubuntu@suzuki:~$ sudo nmap -sU -p 53 192.168.31.1
Starting Nmap 7.93 ( https://nmap.org ) at 2022-10-21 23:18 +08
Nmap scan report for XiaoQiang (192.168.31.1)
Host is up (0.0027s latency).
PORT STATE SERVICE
53/udp open domain
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
ubuntu@scanner:~$ sudo nmap -n -sS -p 1-1024 192.168.31.1
Starting Nmap 7.93 ( https://nmap.org ) at 2022-10-21 23:35 +08
Nmap scan report for 192.168.31.1
Host is up (0.0075s latency).
Not shown: 1020 closed tcp ports (reset)
PORT STATE SERVICE
53/tcp open domain
80/tcp open http
443/tcp open https
784/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.34 seconds
ubuntu@scanner:~$
With this, I have more confidence with Multipass now. 😉