May 31, 2022

httpx - HTTP Toolkit

httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probers using the retryablehttp library. It is designed to maintain result reliability even with an increased number of threads.


echo 192.168.233.81:8080 | httpx -probe -title -tech-detect -status-code -jarm


Installation

$ go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
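Once installed, httpx can also probe a whole list of targets in one run. A quick sketch, assuming a hosts.txt file with one host or URL per line:

$ httpx -l hosts.txt -silent -status-code -title -tech-detect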



May 30, 2022

HTTP Version

How many HTTP versions have you come across? 

We used HTTP/1.0 for years back in the 1990s, then moved to HTTP/1.1 and have used it most of the time since. The HTTP working group introduced HTTP/2 (based on SPDY) in 2015, and now HTTP/3 (built on QUIC) in 2022.

HTTP/2, which grew out of SPDY, has some new features (like server push) and performance improvements (header compression) over HTTP/1.1.

HTTP/3 runs over QUIC, a UDP-based stream-multiplexing, encrypted transport protocol documented in RFC 9000.

As of today, a few websites have migrated to HTTP/3, for example www.facebook.com, blog.cloudflare.com, and www.google.com.

HTTP/3 Check is a hosted QUIC protocol exploration tool used to test whether a server supports the QUIC transport protocol and the HTTP/3 semantics.

The majority of browsers support HTTP/2 (enabled by default), but not all of them support HTTP/3 yet. Below are the curl and httpx commands to check the HTTP version.

$ curl -sI https://www.dell.com -o/dev/null -w '%{http_version}\n'
1.1

$ curl -sI https://www.google.com -o/dev/null -w '%{http_version}\n'
2
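If your curl build includes HTTP/3 support (not all distribution builds do), you can also ask it to try HTTP/3 directly; against a server that supports it, this should print 3:

$ curl --http3 -sI https://blog.cloudflare.com -o/dev/null -w '%{http_version}\n'
3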

$ echo www.youtube.com | httpx -http2 -title -pipeline -vhost

    __    __  __       _  __
   / /_  / /_/ /_____ | |/ /
  / __ \/ __/ __/ __ \|   /
 / / / / /_/ /_/ /_/ /   |
/_/ /_/\__/\__/ .___/_/|_|
             /_/              v1.2.1

                projectdiscovery.io

Use with caution. You are responsible for your actions.
Developers assume no liability and are not responsible for any misuse or damage.
https://www.youtube.com [YouTube] [vhost] [http2]

 


May 29, 2022

DNSX - A Multi-purpose DNS Toolkit

dnsx is a fast and multi-purpose DNS toolkit that allows you to perform multiple DNS queries, with support for DNS wildcard filtering.

It supports:

  • DNS resolution and brute-force mode
  • Multiple resolver formats (TCP/UDP/DOH/DOT)
  • Automatic wildcard handling


Installation

$ go install -v github.com/projectdiscovery/dnsx/cmd/dnsx@latest
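A quick sketch of basic usage, assuming a names.txt wordlist for the brute-force run (flags as listed in the dnsx help):

$ echo example.com | dnsx -a -resp

$ dnsx -d example.com -w names.txt -resp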



May 28, 2022

Cobalt Strike and Pentest

Cobalt Strike is a commercial penetration testing tool, which gives security testers access to a large variety of attack capabilities. It can be used to conduct spear-phishing and gain unauthorized access to systems, and can emulate a variety of malware and other advanced threat tactics.

This powerful network attack platform combines social engineering, unauthorized access tools, network pattern obfuscation and a sophisticated mechanism for deploying malicious executable code on compromised systems. It can now be used by attackers to deploy advanced persistent threat (APT) attacks against any organization. 

This threat emulation program has the following capabilities:

  • Reconnaissance—discovers which client-side software your target uses, with version info to identify known vulnerabilities.
  • Attack Packages—provides a social engineering attack engine, creates trojans disguised as innocent files such as Java applets, Microsoft Office documents or Windows programs, and provides website cloning to enable drive-by downloads.
  • Collaboration—Cobalt Team Server allows a group host to share information with a group of attackers, communicate in real time and share control of compromised systems.
  • Post Exploitation—Cobalt Strike uses Beacon, a dropper that can deploy PowerShell scripts, log keystrokes, take screenshots, download files, and execute other payloads.
  • Covert Communication—enables attackers to modify their network indicators on the fly. Makes it possible to load C2 profiles to appear like another actor, and egress into a network using HTTP, HTTPS, DNS or SMB protocol.
  • Browser Pivoting—can be used to get around two-factor authentication.


Detecting Cobalt Strike is also an interesting task, even though it is difficult most of the time; common tells include the default 50050/tcp team server port, DNS listeners returning bogus replies, the default TLS certificate, and so on.
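A rough sketch of hunting for those indicators with common tools, assuming nmap and openssl are installed, the team server still listens on the default 50050/tcp, and you are authorized to scan the range:

$ nmap -p 50050 --open 192.168.233.0/24

$ echo | openssl s_client -connect 192.168.233.81:50050 2>/dev/null | openssl x509 -noout -subject -issuer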

Cobalt Strike is also a post-exploitation framework developed for ethical hackers. It provides a post-exploitation agent and covert channels to emulate an embedded actor in your customer’s network.

It can be extended and customized by the user community. Several excellent tools and scripts have been written and published, but they can be challenging to locate. 

Cobalt Strike is a premium product. However, like Metasploit, it has a free community offering called Community Kit.

Community Kit is a central repository of extensions written by the user community to extend the capabilities of Cobalt Strike. The Cobalt Strike team acts as the curator and provides this kit to showcase this fantastic work.



May 27, 2022

Invert a Complex Dictionary in Python

Have you ever needed to invert a dictionary whose values are lists in Python?

Here is what I mean. We need to convert the following dict:

{ "apple"      : [ "green", "red" ], 

  "watermelon" : [ "green" ], 

  "strawberry" : [ "red" ], 

  "lemon"      : [ "green", "yellow" ] }

To a new dict as below:

{ 'green'  : ['apple', 'watermelon', 'lemon'],
  'red'    : ['apple', 'strawberry'],
  'yellow' : ['lemon'] }


Here's my solution, which uses defaultdict from collections:

>>> from collections import defaultdict
>>> a_dict = { "apple" : [ "green", "red" ], "watermelon" : [ "green" ], "strawberry" : [ "red" ], "lemon" : [ "green", "yellow" ] }
>>> b_dict = defaultdict(list)
>>> for k,v in a_dict.items():
...    for k1 in v:
...        b_dict[k1].append(k)
...
>>> b_dict
defaultdict(<class 'list'>, {'green': ['apple', 'watermelon', 'lemon'], 'red': ['apple', 'strawberry'], 'yellow': ['lemon']})


There is another way that simplifies the loop above, by abusing a set comprehension for its side effects (the {None} it returns is just a by-product of append):

>>> from collections import defaultdict
>>> a_dict = { "apple" : [ "green", "red" ], "watermelon" : [ "green" ], "strawberry" : [ "red" ], "lemon" : [ "green", "yellow" ] }
>>> b_dict = defaultdict(list)
>>> { b_dict[k1].append(k) for k,v in a_dict.items() for k1 in v }
{None}
>>> b_dict
defaultdict(<class 'list'>, {'green': ['apple', 'watermelon', 'lemon'], 'red': ['apple', 'strawberry'], 'yellow': ['lemon']})


Alright, hope this helps!!

May 18, 2022

Docker HelloWorld_2

Continuing from HelloWorld_1, here are more notes on running multiple containers with Docker Compose.

With Docker Compose, we can configure and start multiple containers from a single YAML file. This is helpful for a stack that combines multiple technologies.

As an example, say that you are working on a project that uses a MySQL database, Python for AI/ML, NodeJS for real-time processing, and .NET for serving APIs. Docker makes this easier with the help of Compose.

docker-compose.yml is a YAML file in which we can configure different types of services. Then, with a single command, all containers are built and fired up.

There are 3 main steps involved in using compose:    

  • Generate a Dockerfile for each project.    
  • Setup services in the docker-compose.yml file.    
  • Fire up the containers (see the command sketch at the end of this note).

Here is the full copy of docker-compose.yml.

docker-compose.yml:

version: '3.4'
services:
  super-app-db:
    image: mysql:8.0.28
    environment:
      MYSQL_DATABASE: 'super-app'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - '3306:3306'
    expose:
      - '3306'
  super-app-node:
    build: ./node
    ports:
      - "3000:3000"
  super-app-dotnet:
    build: ./dotnet
    ports:
    - "8080:80"
  super-app-python:
    build: ./python
  super-app-php:
    build: ./php
    ports:
    - "8000:80"

Then, we need to:

  • Configure MySQL
  • Configure NodeJS
  • Configure .NET 
  • Configure Python
  • Configure PHP
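With the Dockerfiles in place and the services defined, firing everything up takes one command. A minimal sketch, run from the folder holding docker-compose.yml (older installs use the docker-compose binary instead):

docker compose up -d --build

docker compose ps

docker compose down

Here up -d --build builds the images and starts all five services in the background, ps shows their status, and down tears everything back down.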

 


May 17, 2022

Docker HelloWorld_1

Containers are the building blocks of Docker. This note walks through various commands for creating and manipulating containers, to give a better understanding of how Docker works.

First, create 3 files: Dockerfile, package.json, server.js

Dockerfile:

FROM node:17-slim
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]
# The app listens on port 8080 (see server.js), so expose that port
EXPOSE 8080

package.json:

{
    "name": "yay-docker",
    "dependencies": {
        "express": "^4.17.1"
    }
}

server.js:

// Minimal Express app listening inside the container on port 8080
const server = require("express")();
server.listen(8080, async () => { });
server.get("/nodejs", async (_, response) => {
  console.log('Request Received for nodejs');
  response.json({ "yay": "docker" });
});


Create an image named "simple_nodejs" from the current folder. Docker will pull the node:17-slim image in multiple layers; once done, it will run npm install to install dependencies like express from your package.json file.

docker build -t simple_nodejs .

Run the image as a container, publishing host port 3000 (external) and mapping it to port 8080 (internal), where the Node app listens.

docker run -p 3000:8080 -d simple_nodejs 

Now, we can test the container by browsing to http://localhost:3000/nodejs.
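Or test it from the terminal with curl; the response is exactly what server.js returns:

curl http://localhost:3000/nodejs
{"yay":"docker"}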

List all containers (including stopped ones):

docker ps -a

Print a container's logs:

docker logs <container-id>

Get a shell inside the container:

docker exec -it <container-id> /bin/sh 

Start, stop, and remove container:

docker start <container-id>

docker stop <container-id>

docker rm <container-id>

 


May 16, 2022

Customizing VM at Launch with cloud-init

Multipass is great. With cloud-init, it is getting better. 

With cloud-init, we can customize our virtual machines at launch when we create them with Multipass.

First, create a cloud-init.yaml file with the content below (the #cloud-config header marks the file as cloud-config user data):

#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAAD........./FAC8DD2xi2pZZc3Dnv/6iE= xx@pf

Next, create the VM with the command below, and list the IP address:

$ multipass launch --name jj --cloud-init cloud-init.yaml

$ multipass list

Last, just SSH into the VM:

$ ssh -l ubuntu 10.163.216.36

[Screenshot: SSH session into the VM]
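To double-check that cloud-init finished without errors, query its status inside the VM (a quick sketch using multipass exec):

$ multipass exec jj -- cloud-init status
status: done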


May 15, 2022

First Try on Multipass

Multipass is a simple Docker-like alternative from Canonical. It is a lightweight cross-platform VM manager, designed for developers who want a fresh Ubuntu environment with a single command.

A cloud-init file can be used for post-install configuration, such as setting up an SSH public key or mounting a disk.

$ sudo snap install multipass --classic 

$ sudo snap refresh multipass --channel stable

$ multipass find

$ multipass launch --name kk focal

$ multipass ls

$ multipass ls --format json 

$ multipass shell kk 

$ multipass launch --name jj --cpus 2 --mem 2G 

$ multipass ls --format yaml

$ multipass info kk

$ multipass exec kk -- lsb_release -a 

$ multipass list 

$ multipass stop jj kk

$ multipass list 

$ multipass delete --purge kk

$ multipass delete --purge jj 

So fun 😜😜 and I love its simplicity.



May 14, 2022

Adding Google Pinyin

This is my quick note on adding the Google Pinyin input method on my Ubuntu.

  1. Login as root.
  2. Add Chinese locales:
    • # dpkg-reconfigure locales
  3. Install the required packages:
    • # apt install fcitx fcitx-pinyin fcitx-googlepinyin
  4. Run 'im-config' and change it to 'fcitx'.
  5. Reboot 
  6. Run 'fcitx-configtool'
    1. Click "+" to add new input method
    2. Uncheck "Only Show Current Language"
    3. Search for "google" to add Google Pinyin
  7. In "Global Config" tab, select the shortcut keys for switching input methods with Google Pinyin.

 


May 13, 2022

History in ZSH

I switched to zsh from bash many years ago. 

Today, I like to share a few tips on using zsh, and those are included in my dotfiles.


How to SKIP a cmdline from .zsh_history

$ echo "This cmdline saved in history."

$  echo "This cmdline will not be saved in history (because of the leading space). Try it!"
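Note that the leading-space trick depends on the hist_ignore_space option; some setups enable it for you, but it is a one-liner to add to .zshrc if yours does not:

setopt hist_ignore_space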


Show datetime and elapsed times in history

I would like to show the datetime and the elapsed time of each cmdline in my history, so I modified the alias in my .zshrc:

alias history='history -i -D 0'

Here -i prints the datetime in ISO format, -D shows the elapsed time each command took, and 0 shows the entire history.


To show history from all the login terminals

I open multiple terminals for work, and I would like to see the cmdline history from all of them. Thus I enable this setopt in my .zshrc:

setopt share_history

 

May 12, 2022

Mounting Disk Image File

After we create a disk image file with dd, we have two options: restore it, or mount the file directly.

Say we have finished cloning a partition to a disk image file as below:

# dd if=/dev/sdb1 of=/mnt/share/disk_120G.img

We can mount the file with the following instructions to access some of the files, without restoring the whole partition to a drive.

# file disk_120G.img

# fdisk -l disk_120G.img

Check the start sector. Let's say the filesystem starts at sector 63 (from the fdisk output). Each sector is 512 bytes long, so we will use an offset of 32256 (63 * 512) bytes.

# losetup -f

# losetup --offset 32256 /dev/loop2 disk_120G.img

# mount /dev/loop2 /mnt/point
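On recent util-linux versions, losetup can read the partition table itself, which avoids the offset arithmetic. A sketch, assuming the image contains a partition table (the allocated loop device name will vary):

# losetup -fP --show disk_120G.img

# mount /dev/loop2p1 /mnt/point

Here -f picks a free loop device, -P scans for partitions (exposed as loop2p1, loop2p2, and so on), and --show prints the device that was allocated.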



May 11, 2022

Restrict SSH Users to Run Limited Commands

This note focuses on how to restrict SSH users to executing only certain commands once they successfully log in to a remote OpenSSH server.


Setup key-based authentication

$ ssh-keygen

$ ssh-copy-id login@remote-ssh-server

On the remote SSH server, a file called 'authorized_keys' should now exist in ~/.ssh, containing the copied public key.

$ ssh login@remote-ssh-server

$ cd ~/.ssh

$ cat authorized_keys

 

Restrict Execution in 'authorized_keys' file

To restrict a user to running only the 'ls' command on this server, we can modify the authorized_keys file as follows:

from="192.168.233.84",command="/usr/bin/ls" ssh-rsa AAAABBB.......

The entry above pins the key to a source IP address and specifies the only command that may be executed.

Once we log in to the remote SSH server, the ls command executes and the connection is closed.

We can also create a Bash script and restrict execution to that script, so that only a limited set of commands is available; a sketch follows below.
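A minimal sketch of such a wrapper, assuming it is saved as /usr/local/bin/limited.sh (a hypothetical path), made executable, and referenced via command="/usr/local/bin/limited.sh" in authorized_keys; OpenSSH exposes the command the client asked for in the SSH_ORIGINAL_COMMAND environment variable:

#!/bin/bash
# Allow only a small whitelist of commands for this key
case "$SSH_ORIGINAL_COMMAND" in
    ls|"ls -l"|uptime)
        $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "Command not allowed." >&2
        exit 1
        ;;
esac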



May 10, 2022

Understanding and Getting Started with ZERO TRUST

 

A look by John at what Zero Trust really is and how to get started.

My takeaways (ZT principles):

  • Verify explicitly (on every single session or resources)
  • Least privilege (just enough and in time) 
  • Assume breach

Other notes: 

  • A wrong VPN deployment may degrade overall security.
  • IAM with SSO (MFA, Passwordless, disable legacy auth, RBAC)
  • Endpoints (TPM, TLS cert, register-managed-compliant)
  • Network (defense-in-depth, end-to-end encryption/IPSec, layers/tiers - microsegmentation, IDS/IPS) 
  • Risk Context and controls (Identity, endpoint, network, conditional access)
  • Infra and apps (policy, shadow IT, proxy)
  • Data (data-driven protection that travels with the data, encryption, classification, Azure Purview)
  • SIEM/SOAR (Azure Sentinel + ML + automation)
 

May 9, 2022

EPSS for Better Vulnerability Management OSINT Strategy

EPSS is a measure of exploitability. Specifically, EPSS estimates the probability of observing any exploitation attempts against a vulnerability in the next 30 days.

This is accomplished by observing and recording exploitation attempts against vulnerabilities, and then collecting as much information as possible about each vulnerability.

EPSS is best used when there is no other evidence of active exploitation. When evidence or other intelligence is available about exploitation activity, that should supersede the EPSS estimate.

EPSS does not account for any environment-specific or compensating controls, and it does not make any attempt to estimate the impact of a vulnerability being exploited. EPSS should not be treated as a complete picture of risk, but it can be used as one of the inputs into risk analyses.

In vulnerability management, EPSS is treated as "pre-threat intel." If an organization has any intel source indicating that something is being exploited (via its own telemetry sensors or OSINT), it should use that as an indication of activity in the wild. For those without evidence of exploitation, or that lack threat intel, EPSS is a great fit.

Thus, EPSS can improve a vulnerability management program's OSINT strategy and prioritization.
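FIRST publishes the scores through a public API, so looking one up is a one-liner. A quick sketch (jq is optional, and CVE-2021-44228 is just an example):

$ curl -s 'https://api.first.org/data/v1/epss?cve=CVE-2021-44228' | jq '.data[0]'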



May 7, 2022

Creating Disk Image and MBR

Creating a disk image, or just the MBR (master boot record), in Linux is common for backups, disk copying, and recovery. The 'dd' command is an easy-to-use tool for making such clones.

To clone an entire hard disk:

# dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

The cmdline above sets the block size to 64K (it can be 128K or another value) and continues the operation while ignoring read errors. It also pads input blocks with zeroes when there are read errors, so data offsets stay in sync. Both hard disks (sda and sdb) must be the same size.


To clone a partition and make a disk image:

dd if=/dev/sdb1 of=disk_sdb1.img bs=128K conv=noerror,sync

dd if=/dev/sdb1 conv=sync,noerror bs=128K | gzip -c > disk_sdb1.img.gz

dd if=/dev/sdb1 conv=sync,noerror bs=128K status=progress | gzip -c | ssh xx@remote.ip dd of=disk_sdb1.img.gz

 

To restore the system:

# gunzip -c disk_sdb1.img.gz | dd of=/dev/sdb1


To copy the MBR:

# dd if=/dev/sda of=/dev/sdb bs=512 count=1

The cmdline above copies the first 512 bytes (the MBR) from the sda disk to sdb. This only works if both disks have identically sized partitions.

# dd if=/dev/sda of=/tmp/mbrsda.bak bs=512 count=1

The cmdline above copies the 512-byte MBR from sda to an image file, for use when the two disks have differently sized partitions.


To restore the MBR to sdb:

# dd if=/tmp/mbrsda.bak of=/dev/sdb bs=446 count=1

The Master Boot Record (MBR) is the 512-byte boot sector that occupies the first sector of a partitioned data storage device such as a hard disk. The MBR is divided into 3 sections:
1. Bootstrap - 446 bytes
2. Partition table - 64 bytes
3. Signature - 2 bytes

When restoring the MBR, use 446 bytes to overwrite/restore only the /dev/sda boot code, and use 512 bytes to overwrite/restore the full /dev/sda MBR (boot code plus partition table and signature).


To back up and restore the primary and extended partition tables:

# sfdisk -d /dev/sda > /tmp/sda.bak

# sfdisk /dev/sda < /tmp/sda.bak

 

To back up the MBR and extended partition schema:

# dd if=/dev/sda of=/tmp/backup-sda.mbr bs=512 count=1

# sfdisk -d /dev/sda > /tmp/backup-sda.sfdisk

 

To restore the MBR and extended partition schema:

# dd if=/tmp/backup-sda.mbr of=/dev/sda

# sfdisk /dev/sda < /tmp/backup-sda.sfdisk

 

Links:

  • https://www.cyberciti.biz/faq/unix-linux-dd-create-make-disk-image-commands/
  • https://www.cyberciti.biz/faq/howto-copy-mbr/

May 6, 2022

GHA Runners - Security In Action

An excellent write-up from Magno Logan about GitHub Actions (GHA), one of the most commonly used CI tools today.

This article covers some security risks and best practices about using GHA as your primary CI tool.

About GitHub Actions (GHA)

GitHub Actions was released in 2019. Working as a CI tool, it helps developers automate tasks within the software development life cycle (SDLC). One advantage of GHA is that developers do not need a separate CI tool; workflows execute directly from GitHub.

Actions are formed by a set of components. These are the six main components of a GHA:

  • Workflows: Automated procedure added to the repository, and is the actual Action itself
  • Events: An activity that triggers a workflow; these can be based on events such as push or pull requests, but they can also be scheduled using the crontab syntax
  • Jobs: A group of one or more steps that are executed inside a runner
  • Steps: These are tasks from a job that can be used to run commands
  • Actions: The standalone commands from the steps
  • Runners: A server that has the GHA runner application installed

 

The full article contains much more information, including:

  1. GitHub Actions (GHA) and its components
  2. GitHub Action (GHA) runners
  3. Cryptomining with GitHub Actions
  4. Ubuntu Runner reconnaissance
  5. Scanning for vulnerabilities
  6. Setting up a reverse shell with Netcat and more
  7. The Mono Web Server XSP
  8. Scanning other runners
  9. Conclusions and recommendations
  10. Trend Micro solutions
[Image: GitHub Action runners]


May 5, 2022

Describe the Resources Required for Virtual Machines in Azure

AZ-900 : Resource Group

This is an excellent 6-minute explanation by John of what a resource group is in Azure.

My notes:

  • Subscription-ID
    • Resource Group
      • VM, OS, Disk, Public-IP
      • VNIC, VNET and subnet, NSG, ip-config  

 


 

May 4, 2022

8 Essential Tips for Securing Networks

Here are the 8 tips I copied from Rapid7 for "emergency field security": steps that any defender can take immediately.

Given the urgency, many information security teams find themselves scrambling to prioritize mitigation actions and protect their networks. Some may not have time to make their networks less flat, patch all the vulnerabilities, set up a backup plan, encrypt all the data at rest, and practice incident response scenarios before disaster strikes.

This essential-security list helps identify the urgent steps to take right now.

  1. Start prioritizing patches with CISA's KEV catalog.
  2. Keep an eye on egress.
  3. Review Active Directory (AD) groups.
  4. Don't laugh off LOL.
  5. Don't push it.
  6. Stick to the script.
  7. Call for backup.
  8. Practice good posture.



May 3, 2022

YAML Tutorial

YAML, which stands for “YAML Ain’t Markup Language”, is a data serialization language designed to be human-friendly and to work well with other programming languages for everyday tasks.

Features:

  • YAML data is portable between programming languages
  • Has a consistent data model
  • Easily readable by humans
  • Supports one-pass (one-direction) processing
  • YAML is case sensitive
  • The files should have .yaml as the extension
  • YAML does not allow tabs for indentation; spaces are used instead



May 2, 2022

Python and GitHub Secrets

This is a simple note to show how to access GitHub Secrets with Python.

First, to add a new secret, go to GitHub repository > Settings > Secrets > New Repository Secret.

Second, define a Name ('SEC_NAME') and put in the Value.

 

Next, map the secret to an environment variable in the GitHub Actions workflow.

....
      - name: Run tests
        env:
          API_KEY: ${{ secrets.SEC_NAME }}
        run: |
....

 

Finally, refer to the environment variable in the Python script.

import os

API_KEY = os.environ['API_KEY']

....



May 1, 2022

Upgrade to Ubuntu 22.04 LTS

Just upgraded to Ubuntu 22.04 today.

As a jumpstart, I searched for some cheatsheets, like "things to do after installing Ubuntu". I also found a very good article introducing basic terminal tips.

Overall, Ubuntu 22.04 is simple and sweet. 🤟

