My requirement is to pull data from an external API. The first call returns only 100 records, along with header information stating the total pages and total records in the remote database. I need to pull all of these records at once and insert them into my database; subsequent calls to the API should only pull records that are new in the remote database.
I am working with ASP.NET Core 3.0 and a SQL Server database.
public async Task GetReservation(int? pageNumber, int? pageSize)
{
    using (HttpClient client = new HttpClient())
    {
        client.BaseAddress = new Uri("https://www.sitename.com/url");
        MediaTypeWithQualityHeaderValue contentType =
            new MediaTypeWithQualityHeaderValue("application/json");
        client.DefaultRequestHeaders.Accept.Add(contentType);

        HttpResponseMessage response = await client.GetAsync(
            $"/api/serviceurl?pageNumber={pageNumber}&pageSize={pageSize}");
        string stringData = await response.Content.ReadAsStringAsync();
        List<Reservations> data =
            JsonConvert.DeserializeObject<List<Reservations>>(stringData);

        // Pagination metadata arrives in the X-Pagination response header.
        var headerInfo = response.Headers.GetValues("X-Pagination").First();
        XPaginationObject obj =
            JsonConvert.DeserializeObject<XPaginationObject>(headerInfo);

        // Insert-into-database code goes here, but only the first
        // 100 records (one page) ever get inserted.
    }
}
headerInfo contains totalPages, totalRecords, currentPage, hasNext, hasPrevious, ...
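For reference, a minimal sketch of the XPaginationObject DTO that the deserialization above would need, assuming the X-Pagination header is a JSON object with the fields just listed (the exact property names are assumptions based on that description):

// Sketch of the pagination DTO; field names are assumed from the
// header description above and may need to match the API's JSON exactly.
public class XPaginationObject
{
    public int TotalPages { get; set; }
    public int TotalRecords { get; set; }
    public int CurrentPage { get; set; }
    public bool HasNext { get; set; }
    public bool HasPrevious { get; set; }
}

Json.NET matches property names case-insensitively by default, so PascalCase properties will bind to camelCase keys such as totalPages.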
2 Answers
u4dcyp6a1#
It looks to me like you're almost there: just run this method in a loop until you have all the records. But first you need to get the total number of pages.
What I would do is:
1. Call the API with pageNumber 1 and pageSize 0 so that you receive the header.
2. Get the total number of pages from the header and loop over the pages until you are done (see the sketch after this list).
3. Write your own logic for fetching only the new reservations; for instance, store the last received page and record number so you can skip those the next time.
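Here is a minimal sketch of that loop, reusing the endpoint, the Reservations type, and the XPaginationObject shape from the question; the InsertReservations helper is a hypothetical stand-in for your SQL Server insert:

// Sketch of the paging loop, assuming the endpoint from the question.
// InsertReservations is a hypothetical helper that writes to SQL Server.
public async Task GetAllReservationsAsync(int pageSize)
{
    using (HttpClient client = new HttpClient())
    {
        client.BaseAddress = new Uri("https://www.sitename.com/url");
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        int currentPage = 1;
        int totalPages = 1; // corrected after the first response

        while (currentPage <= totalPages)
        {
            HttpResponseMessage response = await client.GetAsync(
                $"/api/serviceurl?pageNumber={currentPage}&pageSize={pageSize}");
            response.EnsureSuccessStatusCode();

            string stringData = await response.Content.ReadAsStringAsync();
            List<Reservations> data =
                JsonConvert.DeserializeObject<List<Reservations>>(stringData);

            // Read the real page count from the X-Pagination header.
            var headerInfo = response.Headers.GetValues("X-Pagination").First();
            var pagination =
                JsonConvert.DeserializeObject<XPaginationObject>(headerInfo);
            totalPages = pagination.TotalPages;

            InsertReservations(data); // hypothetical database insert

            currentPage++;
        }
    }
}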
Does this answer your question?
P.S.: It could very well be that your data provider only allows fetching 100 rows at a time. If that is the case, you will have to loop over 100-record pages until you have received all of them.
iih3973s2#
Upon the initial call, you'll obtain the total number of pages. Subsequently, you'll need to loop through each page to collect the data. A straightforward way to accomplish this is to execute a loop and gather the data within each iteration. For a more efficient approach, you could fetch each page and save its raw response along with the page number; this way, each page has its own entry comprising the page number, the total number of pages, and the raw JSON response.
To streamline this process further, consider employing a queue (e.g., an SQS queue). Push the unique identifier for each page to the queue; a worker can then retrieve page identifiers from the queue, enabling parallel processing of the page data.
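A minimal in-process sketch of that pattern, using System.Threading.Channels as a stand-in for an external queue such as SQS (FetchPageAsync and SavePage are hypothetical helpers for downloading and persisting one page):

using System.Threading.Channels;

// Channel<int> stands in here for an external queue such as SQS.
public async Task ProcessPagesInParallelAsync(int totalPages, int workerCount)
{
    Channel<int> queue = Channel.CreateUnbounded<int>();

    // Producer: push every page number onto the queue.
    for (int page = 1; page <= totalPages; page++)
    {
        await queue.Writer.WriteAsync(page);
    }
    queue.Writer.Complete();

    // Workers: each pulls page numbers off the queue independently,
    // so pages are fetched and processed in parallel.
    Task[] workers = new Task[workerCount];
    for (int i = 0; i < workerCount; i++)
    {
        workers[i] = Task.Run(async () =>
        {
            await foreach (int page in queue.Reader.ReadAllAsync())
            {
                string rawJson = await FetchPageAsync(page);  // hypothetical download
                SavePage(page, totalPages, rawJson);          // hypothetical persistence
            }
        });
    }

    await Task.WhenAll(workers);
}

With a real SQS queue the producer and the workers could live in separate processes, but the shape of the logic stays the same.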