
3 simple steps to get you started

Step 1. Watch this 55-second tutorial

Step 2. Get your Chrome extension

Step 3. Get data in a few clicks

Get data from any page you want.

Need more help?

Need to talk to someone? Contact us; we'd love to help.

booksetc.co.uk: Omni Scraper
By George Morcos

Data source unique ID: n93189_eef03483b2f0785289b053806873d847eses
Privacy: Public
Last run status: COMPLETED
Last run: 2021-10-15 13:28:55 UTC
Crawl frequency: Not scheduled
URLs to monitor (provide your own list of URLs you want to monitor; a sketch for generating such a list follows after the URLs):
https://www.booksetc.co.uk/books/view/-9781302928254
https://www.booksetc.co.uk/books/view/-9781779509772
https://www.booksetc.co.uk/books/view/-9781302930417
https://www.booksetc.co.uk/books/view/-9781302930424
https://www.booksetc.co.uk/books/view/-9781302929633
https://www.booksetc.co.uk/books/view/-9781302930592
https://www.booksetc.co.uk/books/view/-9781302928650
https://www.booksetc.co.uk/books/view/-9781302928155
https://www.booksetc.co.uk/books/view/-9781302928391
https://www.booksetc.co.uk/books/view/-9781302929657
https://www.booksetc.co.uk/books/view/-9781302931391
https://www.booksetc.co.uk/books/view/cloak-and-dagger-omnibus-vol-2-9781302930677
https://www.booksetc.co.uk/books/view/x-men-inferno-prologue-omnibus-9781302931360
https://www.booksetc.co.uk/books/view/aliens-the-original-years-omnibus-vol-2-9781302928902
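
All of the monitored URLs above follow the same booksetc.co.uk pattern and end in the book's ISBN-13, most with an empty title slug and a few with a slug such as cloak-and-dagger-omnibus-vol-2. If you keep your own list of ISBNs, a short script can generate the URL list for you. This is a minimal sketch that assumes the slug-less "/books/view/-<ISBN13>" form shown above resolves for any title; the example ISBNs are taken from that list.

# Minimal sketch: build booksetc.co.uk URLs to monitor from a list of ISBN-13s.
# Assumption: the slug-less "/books/view/-<ISBN13>" form seen in the list above
# works for any title; the ISBNs below are copied from that same list.
isbns = [
    "9781302928254",
    "9781779509772",
    "9781302931360",
]

urls = [f"https://www.booksetc.co.uk/books/view/-{isbn}" for isbn in isbns]

# One URL per line, ready to paste into the "URLs to monitor" field.
print("\n".join(urls))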

Sample code snippets to quickly import the data set into your application. Each snippet reads the latest CSV from the cache URL built from the data source unique ID above: https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv

For more information on how to automatically trigger an import, please refer to our WebHook API guide. A minimal receiver sketch follows.
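
As one way to wire that up, the sketch below assumes the WebHook API can POST a notification to a URL you control after each completed run; the Flask framework, the /getdata-webhook path, and the payload handling are illustrative assumptions, and only the cache download URL is taken from this page.

# Minimal sketch of a webhook receiver that re-imports the data set when notified.
# Assumptions: getdata.io POSTs to this endpoint after each run (see the WebHook API
# guide for the actual payload); Flask and the /getdata-webhook path are our choices.
import csv
import io
import urllib.request

from flask import Flask, request

app = Flask(__name__)

CACHE_URL = "https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv"


def import_latest_csv():
    # Download the latest harvest and hand each row to your application.
    with urllib.request.urlopen(CACHE_URL) as response:
        text = response.read().decode("utf-8")
    for row in csv.DictReader(io.StringIO(text)):
        print(row)  # replace with your own import logic


@app.route("/getdata-webhook", methods=["POST"])
def on_new_harvest():
    payload = request.get_json(silent=True)  # payload shape depends on the WebHook API
    print("webhook payload:", payload)
    import_latest_csv()
    return {"status": "ok"}, 200


if __name__ == "__main__":
    app.run(port=8000)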


Integrating with Java

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.Arrays;

public class HelloWorld {
  public static void main(String[] args) {
    try {
      URL urlCSV = new URL(
        "https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv"
      );

      URLConnection urlConn = urlCSV.openConnection();
      BufferedReader br = new BufferedReader(
        new InputStreamReader(urlConn.getInputStream())
      );

      String line;
      String[] fields;
      while ((line = br.readLine()) != null) {
        // Each row; note that a plain split(",") does not handle quoted commas
        fields = line.split(",");
        System.out.println(Arrays.toString(fields));
      }

      // clean up buffered reader
      br.close();

    } catch (Exception e) {
      System.out.println(e.getMessage());
    }
  }
}


Integrating with NodeJs

const csv   = require('csv-parser');
const https = require('https');
const fs    = require('fs');

const file = fs.createWriteStream("temp_download.csv");

https.get(
  "https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv",
  function(response) {
    response.pipe(file);
  }
);

file.on('finish', function() {
  file.close();

  fs.createReadStream('temp_download.csv')
    .pipe(csv())
    .on('data', (row) => {
      // Each row
      console.log(row);
    })
    .on('end', () => {
      console.log('CSV file successfully processed');
    });
});



Integrating with PHP

<?php

$data = file_get_contents("https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv");
$rows = explode("\n", $data);

foreach ($rows as $row) {
  // Each row, parsed into fields
  $fields = str_getcsv($row);
  var_dump($fields);
}


Integrating with Python

import csv
import urllib.request

url = 'https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv'
response = urllib.request.urlopen(url)

# Decode the byte stream line by line so csv.reader receives text
lines = (line.decode('utf-8') for line in response)
cr = csv.reader(lines)

for row in cr:
    # Each row
    print(row)


Integrating with Ruby

require 'open-uri'
require 'tempfile'
require 'csv'

temp_file = Tempfile.new("getdata", :encoding => 'ascii-8bit')
temp_file << URI.open("https://cache.getdata.io/n93189_eef03483b2f0785289b053806873d847eses/latest_all.csv").read
temp_file.rewind

CSV.foreach(temp_file.path, :headers => :first_row) do |row|
  # Each row
  puts row
end

Owner: George Morcos

Categories: Products

Related Data Sources

  • nefisyemektarifleri.com: Nefis Yemek Tarifleri (Easy and Practical Recipes) - Copy (Products, by Tuğba Kıraç)
  • product pricing data - daily first page google search results - Copy (Products, by Xiaoming Chen)
  • product pricing data - daily first page google search results - Copy (Products, by Xiaoming Chen)
  • Günün Fırsatı ve Teklifi & Kısa Süre İndirimde Kalan Ürünler (Today's Deal and Offer & Products Briefly on Discount) - Copy (Products, by Ali Ak)

More related data sources