Get data from any page you want.
Need to talk to someone? Contact us—we’d love to help.
We are still busy preparing this batch of data. Please come back in a few minutes.
It seems this data source has never been run before...
Changes are only available once the data source has run at least twice.
Nope... no Martians around, it seems... Try setting the webhook URL before pressing this button again...
Column 1 | Column 2 | Column 3 | Column 4 | Column 5 | 题目 (Question) | 下一页 (Next Page) | Column 8 | origin_pattern | origin_url | createdAt | updatedAt | pingedAt |
---|---|---|---|---|---|---|---|---|---|---|---|---|
The state supports cooperation among network operators on network security information ( ), ( ), ( ), and ( ), improving the security assurance capabilities of network operators | Relevant industry organizations shall establish and improve network security protection norms and ( ) mechanisms for their industry, strengthen the analysis and assessment of network security risks, regularly issue risk warnings to members, and ( ) assist members in responding to network security risks | Sequential practice | Network-related industry organizations shall, in accordance with their charters, ( ), formulate network security codes of conduct, guide members in strengthening network security protection, raise the level of network security protection, and promote the healthy development of the industry | | | | | https://www.kaoshibao.com/online/paper/detail/?paperid=8026969 | https://www.kaoshibao.com/online/paper/detail/?paperid=8026969 | 2023-03-20 08:53:46 UTC | 2023-03-20 08:53:46 UTC | 2023-03-20 08:53:46 UTC |
Relevant industry organizations shall establish and improve network security protection norms and ( ) mechanisms for their industry, strengthen the analysis and assessment of network security risks, regularly issue risk warnings to members, and ( ) assist members in responding to network security risks | | | | | | | | https://www.kaoshibao.com/online/paper/detail/?paperid=8026969 | https://www.kaoshibao.com/online/paper/detail/?paperid=8026969 | 2023-03-20 08:53:46 UTC | 2023-03-20 08:53:46 UTC | 2023-03-20 08:53:46 UTC |
| | | | | | | | https://www.kaoshibao.com/online/paper/detail/?paperid=8026969 | https://www.kaoshibao.com/online/paper/detail/?paperid=8026969 | 2023-03-20 08:53:46 UTC | 2023-03-20 08:53:46 UTC | 2023-03-20 08:53:46 UTC |
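Rather than working with positional lists, the sample rows above can be loaded as records keyed by the column names. A minimal sketch using Python's `csv.DictReader`, with a shortened two-column stand-in for the real file (the inline values here are illustrative, not pulled live from the feed):

```python
import csv
import io

# Illustrative stand-in for the downloaded CSV; the real file carries all
# thirteen columns shown in the table above.
sample = io.StringIO(
    "origin_url,createdAt\n"
    "https://www.kaoshibao.com/online/paper/detail/?paperid=8026969,"
    "2023-03-20 08:53:46 UTC\n"
)

# DictReader maps each row to a dict keyed by the header names
for row in csv.DictReader(sample):
    print(row["createdAt"])  # access fields by column name
```

The same pattern applies unchanged to the full download: pass the decoded response lines to `csv.DictReader` and address cells by header instead of index.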
Sample code snippets for quickly importing this data set into your application.
For more information on how to automatically trigger an import, please refer to our WebHook API guide.
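The WebHook API guide covers the exact payload your endpoint will receive. As a rough sketch, assuming the notification is a JSON body that carries the fresh CSV's location (the `csv_url` field name here is a guess, not the documented contract), the receiving side might extract the download URL like this:

```python
import json

# Hypothetical payload field name; check the WebHook API guide for the
# real contract before relying on it.
def extract_csv_url(body: bytes) -> str:
    """Pull the download URL out of a webhook notification body."""
    payload = json.loads(body)
    return payload["csv_url"]

# Example notification a webhook endpoint might receive (illustrative only)
notification = json.dumps({
    "csv_url": "https://cache.getdata.io/n118317_277ac06f33abe1e73e6cf1cf7a3faa46eses/latest_all.csv"
}).encode("utf-8")

print(extract_csv_url(notification))
```

Once the URL is extracted, any of the language snippets below can fetch and parse the file.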
Integrating with Java
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.Arrays;

public class HelloWorld {
    public static void main(String[] args) {
        try {
            URL urlCSV = new URL(
                "https://cache.getdata.io/n118317_277ac06f33abe1e73e6cf1cf7a3faa46eses/latest_all.csv"
            );
            URLConnection urlConn = urlCSV.openConnection();
            InputStreamReader inputCSV = new InputStreamReader(urlConn.getInputStream());
            BufferedReader br = new BufferedReader(inputCSV);

            String line;
            String[] fields;
            while ((line = br.readLine()) != null) {
                // Each row
                fields = line.split(",");
                System.out.println(Arrays.toString(fields));
            }

            // Clean up the buffered reader
            br.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
```
Integrating with NodeJs
```javascript
const csv = require('csv-parser');
const https = require('https');
const fs = require('fs');

const file = fs.createWriteStream("temp_download.csv");
const request = https.get(
  "https://cache.getdata.io/n118317_277ac06f33abe1e73e6cf1cf7a3faa46eses/latest_all.csv",
  function (response) {
    response.pipe(file);
  }
);

file.on('finish', function () {
  file.close();
  fs.createReadStream('temp_download.csv')
    .pipe(csv())
    .on('data', (row) => {
      // Each row
      console.log(row);
    })
    .on('end', () => {
      console.log('CSV file successfully processed');
    });
});
```
Integrating with PHP
```php
$data = file_get_contents("https://cache.getdata.io/n118317_277ac06f33abe1e73e6cf1cf7a3faa46eses/latest_all.csv");
$rows = explode("\n", $data);
foreach ($rows as $row) {
    # Each row
    var_dump($row);
}
```
Integrating with Python
```python
import csv
import urllib.request

url = 'https://cache.getdata.io/n118317_277ac06f33abe1e73e6cf1cf7a3faa46eses/latest_all.csv'
response = urllib.request.urlopen(url)

# Decode the byte stream so the csv module can parse it
lines = (line.decode('utf-8') for line in response)
for row in csv.reader(lines):
    # Each row
    print(row)
```
Integrating with Ruby
```ruby
require 'open-uri'
require 'tempfile'
require 'csv'

temp_file = Tempfile.new("getdata", encoding: 'ascii-8bit')
temp_file << URI.open("https://cache.getdata.io/n118317_277ac06f33abe1e73e6cf1cf7a3faa46eses/latest_all.csv").read
temp_file.rewind

CSV.foreach(temp_file.path, headers: :first_row) do |row|
  # Each row
  puts row
end
```