Get data from any page you want.
Need to talk to someone? Contact us—we’d love to help.
We are still busy preparing this batch of data. Please come back in a few minutes.
Seems like this data source has never been run before...
Changes are only available once the data source has been run at least twice.
Nope... guess no Martians around... Maybe set the webhook URL before pressing this button again...
| Column 1 | Column 2 | Column 3 | Column 4 | origin_pattern | origin_url | createdAt | updatedAt | pingedAt |
|---|---|---|---|---|---|---|---|---|
| A forum member died of binge-eating and purging, weighing barely 20 kg at the end, so the forum was banned; the purging forum's members have since moved to an app called 相识于兔 | I hadn't checked in a long time; I just looked and the posts have all been deleted, with only a few left. Purging isn't even illegal, so why did this happen? | A forum member died of binge-eating and purging, weighing barely 20 kg at the end, so the forum was banned; the purging forum's members have since moved to an app called 相识于兔 | | https://www.zhihu.com/question/54281020 | https://www.zhihu.com/question/54281020 | 2019-05-22 09:10:53 | 2019-05-22 09:10:53 | 2019-05-22 09:10:53 |
| Many people who learned about purging did not come to understand the eating disorder; instead they wanted to learn it, and kept asking how to purge. The original forum members couldn't stand it, and Baidu couldn't just ignore it either | | | | https://www.zhihu.com/question/54281020 | https://www.zhihu.com/question/54281020 | 2019-05-22 09:10:53 | 2019-05-22 09:10:53 | 2019-05-22 09:10:53 |
| A well-known domestic mukbang streamer was a veteran forum member, so the forum got caught in the crossfire | | | | https://www.zhihu.com/question/54281020 | https://www.zhihu.com/question/54281020 | 2019-05-22 09:10:53 | 2019-05-22 09:10:53 | 2019-05-22 09:10:53 |
Sample code snippets to quickly import this data set into your application.
For more information on how to automatically trigger an import, please refer to our WebHook API guide.
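Whichever language you use, the exported CSV shares the column layout shown in the data table above. The sketch below, in Python, parses export text into dictionaries keyed by those column names; the sample row is illustrative only (the URL and timestamps mirror the table, the first four values are placeholders), not real export data:

```python
import csv
import io

# Column layout taken from the data table above.
FIELDNAMES = ["Column 1", "Column 2", "Column 3", "Column 4",
              "origin_pattern", "origin_url",
              "createdAt", "updatedAt", "pingedAt"]

def parse_export(csv_text):
    """Parse getdata.io CSV text into a list of row dictionaries."""
    reader = csv.DictReader(io.StringIO(csv_text), fieldnames=FIELDNAMES)
    return list(reader)

# Illustrative sample row matching the export's shape (not real data).
sample = ('a,b,c,d,'
          'https://www.zhihu.com/question/54281020,'
          'https://www.zhihu.com/question/54281020,'
          '2019-05-22 09:10:53,2019-05-22 09:10:53,2019-05-22 09:10:53\n')

rows = parse_export(sample)
print(rows[0]["origin_url"])  # prints the origin_url cell of the first row
```

Working with dictionaries rather than positional fields keeps your code readable if the column order ever changes.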
Integrating with Java
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.Arrays;

public class HelloWorld {
    public static void main(String[] args) {
        try {
            URL urlCSV = new URL(
                "https://cache.getdata.io/n66968_3a9aab0f56340718c3807eb855c67f0ceses/latest_all.csv"
            );
            URLConnection urlConn = urlCSV.openConnection();
            InputStreamReader inputCSV = new InputStreamReader(urlConn.getInputStream());
            BufferedReader br = new BufferedReader(inputCSV);

            String line;
            String[] fields;
            while ((line = br.readLine()) != null) {
                // Each row
                fields = line.split(",");
                System.out.println(Arrays.toString(fields));
            }

            // Clean up the buffered reader
            br.close();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
```
Integrating with NodeJs
```javascript
const csv = require('csv-parser');
const https = require('https');
const fs = require('fs');

const file = fs.createWriteStream("temp_download.csv");
const request = https.get(
  "https://cache.getdata.io/n66968_3a9aab0f56340718c3807eb855c67f0ceses/latest_all.csv",
  function (response) {
    response.pipe(file);
  }
);

file.on('finish', function () {
  file.close();
  fs.createReadStream('temp_download.csv')
    .pipe(csv())
    .on('data', (row) => {
      // Each row
      console.log(row);
    })
    .on('end', () => {
      console.log('CSV file successfully processed');
    });
});
```
Integrating with PHP
```php
$data = file_get_contents("https://cache.getdata.io/n66968_3a9aab0f56340718c3807eb855c67f0ceses/latest_all.csv");
$rows = explode("\n", $data);

foreach ($rows as $row) {
    // Each row
    var_dump($row);
}
```
Integrating with Python
```python
import csv
import urllib.request

url = 'https://cache.getdata.io/n66968_3a9aab0f56340718c3807eb855c67f0ceses/latest_all.csv'

with urllib.request.urlopen(url) as response:
    lines = (line.decode('utf-8') for line in response)
    for row in csv.reader(lines):
        # Each row
        print(row)
```
Integrating with Ruby
```ruby
require 'open-uri'
require 'tempfile'
require 'csv'

temp_file = Tempfile.new("getdata", encoding: 'ascii-8bit')
temp_file << URI.open("https://cache.getdata.io/n66968_3a9aab0f56340718c3807eb855c67f0ceses/latest_all.csv").read
temp_file.rewind

CSV.foreach(temp_file.path, headers: :first_row) do |row|
  # Each row
  puts row
end
```
created on 2024-05-16