Thursday, October 29, 2015

What client-side technologies can I use to retrieve web page content from a URL, bypassing the same-origin policy?

My goal is to extend the user's browsing experience by adding functionality to specific pages of a website (those whose URL matches a regular expression), preferably without reinventing the wheel (i.e. writing a browser- and website-specific plugin).

Whenever the user accesses a matched page (the main page), the code would retrieve from within that page a set of URLs (the secondary pages). It would then load the HTML of each URL (i.e. the plain HTML returned by the server for each secondary page) and extract some key components using regular expressions. Finally, it would modify the HTML of the main page, adding to it the information retrieved from the secondary pages.

In short: example.com/e.html contains two URLs, a.com/a.html and b.com/b.html; a.html contains the string "Hello" and b.html contains the string "World !". When users view e.html, instead of seeing two links they see the string "Hello World !". I do not own example.com, so I cannot use server-side languages.
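To make the intended transformation concrete, here is a plain-JavaScript sketch of the extraction and merging steps described above. The HTML strings, the `href` pattern, and the `<body>` extraction regex are all made up for illustration; in the real scenario the secondary pages would be fetched over the network rather than looked up in an object:

```javascript
// Hypothetical HTML of the main page, containing the two secondary links.
const mainHtml =
  '<p><a href="http://a.com/a.html">a</a> <a href="http://b.com/b.html">b</a></p>';

// Extract every href ending in .html (illustrative pattern only).
function extractUrls(html) {
  const re = /href="([^"]+\.html)"/g;
  const urls = [];
  let m;
  while ((m = re.exec(html)) !== null) {
    urls.push(m[1]);
  }
  return urls;
}

// Stand-in for the plain HTML the servers would return for each secondary page.
const responses = {
  "http://a.com/a.html": "<body>Hello</body>",
  "http://b.com/b.html": "<body>World !</body>",
};

// Pull the key component out of each secondary page's HTML with a
// regular expression, then join the pieces into the text to display.
function mergeSnippets(urls, fetchBody) {
  return urls
    .map(function (url) {
      const match = /<body>([^<]*)<\/body>/.exec(fetchBody(url));
      return match ? match[1] : "";
    })
    .join(" ");
}

const merged = mergeSnippets(extractUrls(mainHtml), (u) => responses[u]);
// merged === "Hello World !"
```

The hard part, of course, is not this string handling but performing the cross-origin fetch that `fetchBody` stands in for.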

I thought of JavaScript + Greasemonkey, but I cannot seem to circumvent the same-origin policy.
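For context, a Greasemonkey attempt might look like the sketch below. Note that Greasemonkey exposes a privileged request API, `GM_xmlhttpRequest` (requested via `@grant` in the metadata block), which, unlike a page-level `XMLHttpRequest`, is not restricted by the same-origin policy. The `@include` pattern, the target URL, and the extraction regex are made up for the example:

```javascript
// ==UserScript==
// @name     Merge secondary pages
// @include  /^https?://example\.com/.*\.html$/
// @grant    GM_xmlhttpRequest
// ==/UserScript==

// Fetch one secondary page cross-origin. GM_xmlhttpRequest runs with
// extension privileges, so it is exempt from the same-origin policy.
function fetchSecondary(url, onText) {
  GM_xmlhttpRequest({
    method: "GET",
    url: url,
    onload: function (response) {
      onText(response.responseText);
    },
  });
}

// Only run when the Greasemonkey API is actually present, so the
// sketch stays inert outside a userscript environment.
if (typeof GM_xmlhttpRequest === "function") {
  fetchSecondary("http://a.com/a.html", function (html) {
    // Extract the key component with a regular expression and
    // splice it into the main page (placeholder logic).
    var m = /<body>([^<]*)<\/body>/.exec(html);
    if (m) {
      document.body.insertAdjacentHTML("beforeend", "<p>" + m[1] + "</p>");
    }
  });
}
```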

The question is: which client-side language would you use to implement the above?
