Blog

  • sftp chroot

    # cat /etc/ssh/sshd_config  | grep -vE '^#|^$'
    AuthorizedKeysCommand /usr/bin/google_authorized_keys
    AuthorizedKeysCommandUser root
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_ecdsa_key
    HostKey /etc/ssh/ssh_host_ed25519_key
    SyslogFacility AUTHPRIV
    PermitRootLogin no
    AuthorizedKeysFile      .ssh/authorized_keys
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    GSSAPIAuthentication yes
    GSSAPICleanupCredentials no
    UsePAM yes
    X11Forwarding yes
    ClientAliveInterval 420
    UseDNS no
    AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
    AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
    AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
    AcceptEnv XMODIFIERS
    Subsystem sftp internal-sftp
    Match User user1
       ChrootDirectory /data/chroot/%u
       ForceCommand internal-sftp
       AllowTcpForwarding no
       X11Forwarding no
    

    https://serverfault.com/a/656756
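sshd refuses the session unless every path component of ChrootDirectory is owned by root and not group/world-writable, so the user needs a writable subdirectory inside the chroot. A minimal setup sketch for user1 (paths taken from the Match block above; the upload/ subdirectory name is an assumption):

```shell
# The chroot path itself must be root-owned and not writable by others
mkdir -p /data/chroot/user1/upload
chown root:root /data/chroot /data/chroot/user1
chmod 755 /data/chroot /data/chroot/user1

# Give the user one writable subdirectory inside the chroot
chown user1:user1 /data/chroot/user1/upload
```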

  • 2021 Reading List

    Goal: 30 books. Single-threaded mode; remember: no concurrency, never read several books at the same time.

    Date        Technical                 Non-technical
    2021-01-03                            乡土中国
    2021-02-03                            超越你的大脑
    2021-03-03                            小家越住越大 1 2 3
    2021-05-03                            原生家庭
    2021-08-16  Terraform: Up & Running
    2021-09-01                            吃掉那只青蛙
    2021-09-10                            十年一觉电影梦
    2021-11-10                            李诞脱口秀工作手册
    2021-11-20                            夜晚的潜水艇


  • alpine apk

    apk add bash curl tcpdump bind-tools busybox-extras

  • k8s pod OOMKill Exit Code: 137

    Identify that it is an OOMKill

    The Reason should be OOMKilled, and the Finished timestamp shows when it happened.

    kubectl get pods testapp-v092-p8czf -o yaml | less -i


    Last State: Terminated
    Reason: OOMKilled
    Exit Code: 137
    Started: Fri, 11 Sep 2020 11:00:08 +0800
    Finished: Mon, 14 Sep 2020 13:00:46 +0800
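    A jsonpath query avoids paging through the full YAML; a sketch, assuming the container is named testapp:

```shell
# Print only the last termination reason of the testapp container
kubectl get pod testapp-v092-p8czf -o \
  jsonpath='{.status.containerStatuses[?(@.name=="testapp")].lastState.terminated.reason}'
```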

    OOM heap dump (when the OOMKill happens)

    1. Add Java start parameters to the container entrypoint

    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/dump

    2. Mount an emptyDir volume at /var/log/dump for the pod. Within the pod lifecycle, /var/log/dump survives container restarts.
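    The emptyDir mount might look like this in the deployment spec (volume name and container name are illustrative; the mount path matches the commands in this section):

```yaml
# emptyDir lives as long as the pod, so the heap dump written on OOMKill
# survives the container restart and can be copied out afterwards.
spec:
  containers:
  - name: testapp
    volumeMounts:
    - name: dump
      mountPath: /var/log/dump
  volumes:
  - name: dump
    emptyDir: {}
```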

    3. Compress the dump file and download it

    gzip heapdump2020-09-15-03-198523874477783269974.hprof
    kubectl cp testapp-v1127-xnbhq:/var/log/dump/heapdump2020-09-15-03-198523874477783269974.hprof.gz /tmp/heapdump2020-09-15-03-198523874477783269974.hprof.gz

    Check list (pod has already restarted)

    Check Stackdriver application logs.

    Check memory and CPU limits:

    $ kubectl get pods testapp-v203-trsfl -o yaml

    resources:
      limits:
        cpu: 1500m
        memory: 1229Mi
      requests:
        cpu: 300m
        memory: 1Gi

    Check kubectl top

    $ kubectl top pod testapp-v203-trsfl --containers
    POD                  NAME          CPU(cores)   MEMORY(bytes)
    testapp-v203-trsfl   testapp       13m          1144Mi
    testapp-v203-trsfl   istio-proxy   5m           47Mi

    Check New Relic pod memory.

    Commands to investigate the Java heap (inside the pod)

    apk add --no-cache jattach --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/
    jattach <pid> inspectheap
    jattach <pid> jcmd VM.info

    Use ps to find the RSS of the process (inside the pod)

    $ kubectl exec -it testapp-v203-trsfl -- /bin/bash
    ps -o pid,user,vsz,rss,comm,args
    PID USER VSZ RSS COMMAND COMMAND
    1 root 4332 720 tini /tini -- /entrypoint.sh java
    7 test 6.3g 1.1g java java -XX:+UseG1GC -Xms768m -Xmx768m -DREGION=gcp_hk -XX:+ExitOnOutOfMemoryError -XX:+UseStringDeduplication -XX:StringDeduplicationAgeThreshold=3 -agentlib:jdwp=transport=dt_socket,ser
    18215 root 2620 2316 bash /bin/bash
    18267 root 1572 20 ps ps -o pid,user,vsz,rss,comm,args


  • NewRelic and Opsgenie integration

    NewRelic

    policy
    channel => Opsgenie Teams foobar

    Opsgenie
    integration
    teams foobar

  • debug istio multicluster

    curl -X POST http://localhost:15000/logging?level=debug
    

    Check config

    bin/istioctl proxy-config listener  istio-ingressgateway-6589659c8c-f76f9 --port 15443 -o json -n istio-system
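    Debug-level logging on a gateway is noisy; the same Envoy admin endpoint also accepts a per-logger level and a reset. A sketch, assuming a port-forward to the pod's admin port 15000 is already in place:

```shell
# List current logger levels
curl -X POST http://localhost:15000/logging

# Raise only one logger (e.g. 'filter') instead of everything
curl -X POST "http://localhost:15000/logging?filter=debug"

# Restore the default level when finished
curl -X POST "http://localhost:15000/logging?level=info"
```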