DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 11/07/2022. It is noted, however, that applicant has not filed a certified copy of the JP application as required by 37 CFR 1.55.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “storage unit,” “acquisition unit,” “output unit,” and “learning unit” in claims 1 and 6-9. Structural support for the claimed units can be found in at least Fig. 21 and paragraphs 23, 27, and 104.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101, software per se
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 10, as recited, is directed to an estimation model that is a trained model. However, the specific models (physical components) are not explicitly disclosed in the specification to properly define the system sought to be protected. Such models can be interpreted as computer code, per se, and are therefore unpatentable. The claims as written are directed to unpatentable subject matter; appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1 (The Statutory Categories): Is the claim to a process, machine, manufacture, or composition of matter? MPEP 2106.03.
Per Step 1, claims 1-9 are directed to a device, and claim 10 is directed to software.
Thus, the claims are directed to statutory categories of invention. However, the claims are rejected under 35 U.S.C. 101 because they are directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application.
The analysis proceeds to Step 2A Prong One.
Step 2A Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? MPEP 2106.04.
The abstract idea of claim 1;
A population output device comprising:
a storage unit that stores an estimation model that receives an input of area information related to an area and including information related to a population of the area and information related to a summation value, for each type of a map element, related to one or more map elements constituting map data of the area, and outputs population information related to a population estimated for each type of the map element of the area;
an acquisition unit that acquires the area information related to a target area that is an area to be targeted; and
an output unit that outputs the population information related to the target area, the population information being output by inputting the area information related to the target area acquired by the acquisition unit to the estimation model stored in the storage unit.
The abstract idea of claim 8;
The population output device according to Claim 1, wherein the output unit computes a population estimated for each map element based on the population information related to the target area and information related to the map element of the target area, and further outputs the computed population.
The abstract idea of claim 9;
The population output device according to Claim 1, further comprising: a learning unit that trains the estimation model based on the area information related to the area and information related to a population for each type of the map element of the area, wherein the storage unit stores the estimation model trained by the learning unit.
The abstract idea of claim 10;
An estimation model that is a trained model used by a population output device including an acquisition unit that acquires area information related to an area and including information related to a population of the area and information related to a summation value, for each type of a map element, related to one or more map elements constituting map data of the area, and an output unit that outputs population information related to a population estimated for each type of the map element of the area,
wherein the estimation model is configured by a neural network that has learned a weighting coefficient based on the area information related to the area and information related to a population for each type of the map element of the area, and
the output unit outputs the population information related to a target area that is an area to be targeted, the population information being output by inputting the area information acquired by the acquisition unit and related to the target area to the estimation model.
The abstract idea steps identified above are those which could be performed mentally, including with pen and paper. The steps describe, at a high level, storing estimated information, and receiving, acquiring, and estimating using a device. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion), then it falls within the Mental Processes – Concepts Performed in the Human Mind grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? MPEP 2106.04.
This judicial exception is not integrated into a practical application because the additional elements are merely instructions to apply the abstract idea to a computer, as described in MPEP 2106.05(f).
Claim 1 recites the following additional elements: population output device, an acquisition unit, trained model, an output unit, estimation model, storage unit.
Claim 8 recites the following additional elements: the population output device, output unit.
Claim 9 recites the following additional elements: The population output device, a learning unit, estimation model, storage unit, estimation model trained by the learning unit.
Claim 10 recites the following additional elements: An estimation model, population output device, neural network.
These elements are merely instructions to apply the abstract idea to a computer, per MPEP 2106.05(f). Applicant has only described generic computing elements in their specification, as seen in [0051] of applicant’s specification as filed, for example.
Further, the combination of these elements is nothing more than a generic computing system applied to the tasks of the abstract idea. Because the additional elements are merely instructions to apply the abstract idea to a generic computing system, they do not integrate the abstract idea into a practical application, when viewed in combination. See MPEP 2106.05(f).
Therefore, per Step 2A Prong Two, the additional elements, alone and in combination, do not integrate the judicial exception into a practical application. The claim is directed to an abstract idea.
Step 2B (The Inventive Concept): Does the claim recite additional elements that amount to significantly more than the judicial exception? MPEP 2106.05.
Step 2B involves evaluating the additional elements to determine whether they amount to significantly more than the judicial exception itself.
Per the examination process, the identification of the additional element(s) in the claim and the conclusions pertaining to MPEP 2106.05(f) are carried over from Step 2A Prong Two.
The additional elements and their analysis are therefore carried over: applicant has merely recited elements that facilitate the tasks of the abstract idea, as described in MPEP 2106.05(f).
Further, the combination of these elements is nothing more than a generic computing system. When the claim elements above are considered, alone and in combination, they do not amount to significantly more.
Therefore, per Step 2B, the additional elements, alone and in combination, are not significantly more. The claims are not patent eligible.
The analysis takes into consideration all dependent claims as well:
Dependent claims 2-7 contain additional steps that further narrow the abstract idea above.
Claims 2-7 recite the following additional element: the population output device. Applicant has only described generic computing elements in the specification, as seen in [0015]-[0018] of applicant’s specification as filed. This does not integrate the abstract idea into a practical application and/or add significantly more. The claims are ineligible. Refer to MPEP 2106.05(f).
Accordingly, claims 1-10 are rejected under 35 USC § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim 10 is rejected under 35 U.S.C. 102 as being anticipated by Cheng et al., “Remote Sensing and Social Sensing Data Fusion for Fine-Resolution Population Mapping with a Multimodel Neural Network” (hereafter Cheng).
As per claim 10;
Cheng discloses;
An estimation model that is a trained model used by a population output device including an acquisition unit that acquires area information
{[Page 2; Related work] As geospatial data are gradually enriched, machine-learning-based methods determine weights based on specific rules related to spatial characteristics at different levels (e.g., nighttime light (NTL) intensity [26], and POIs [27], [28]). Social sensing data can be used to improve the accuracy of population spatialization [29]. Remote and social sensing data can complement each other by offering different urban surface information. These methods have produced many well-known high-resolution population datasets covering large geographic areas, including the Gridded Population of the World [30], the LandScan [31], [32], WorldPop [33], and the Global Human Settlement Population Gridded datasets [34].}
Cheng discloses;
related to an area and including information related to a population of the area and information related to a summation value, for each type of a map element, related to one or more map elements constituting map data of the area, and
{[Page 11; Qualitative population spatialization map]; The population distribution estimation model in this study is based on grid-level variables (such as impact factor values and grid-level population distribution map). However, the total population of each county controls the actual population distribution. To reduce the error of the predicted population at the county level, we use the ratio of the county-level statistical population to the predicted population to correct the pixel-level predicted population.}
Cheng discloses;
an output unit that outputs population information related to a population estimated for each type of the map element of the area,
{[Page 3; Methodology] Then, the output is the corresponding population value of the grid. Finally, we validated our method by fitting the results to township-level census data. The model results were compared with the published gridded population dataset (WorldPop) for accuracy verification and analysis.}
Cheng discloses;
wherein the estimation model is configured by a neural network that has learned a weighting coefficient based on the area information related to the area and information related to a population for each type of the map element of the area, and
{[Page 2]; To address the problem of fine-resolution population distribution based on multisource data, we designed a schema for population spatialization that includes multisource data processing, spatial data representation, a multimodel neural network for population estimation, and model verification methods. Specifically, the proposed approach uses a first-order space matrix of a geographic unit to represent local spatial information and to construct local spatial features of the units in question. The proposed multimodel neural network, which combines a CNN and a multilayer perceptron (MLP) model, takes account of both local and global spatial information by integrating multisource data. The CNN-based model extracts spatial dependence features from the first-order adjacency matrix, and the MLP-based model estimates a fine-resolution population mapping at a 100-m spatial resolution.}
Cheng discloses;
the output unit outputs the population information related to a target area that is an area to be targeted, the population information being output
{[Page 3; Methodology] Then, the output is the corresponding population value of the grid. Finally, we validated our method by fitting the results to township-level census data. The model results were compared with the published gridded population dataset (WorldPop) for accuracy verification and analysis.}
Cheng discloses;
by inputting the area information acquired by the acquisition unit and related to the target area to the estimation model.
{[Page 2; introduction] After multisource data processing, the data of the two representations are inputted to the multimodel neural network, using the CNN-based and the MLP-based models to extract significant features. Those features are then fused, and the outputs are assigned to fully connected (FC) layers and the regression layer for predictive purposes.}
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-9 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al., “Remote Sensing and Social Sensing Data Fusion for Fine-Resolution Population Mapping with a Multimodel Neural Network” (hereafter Cheng), in view of Wardrop et al., “Spatially disaggregated population estimates in the absence of national population and housing census data” (hereafter Wardrop), in further view of Rongyong et al., CN 110991225 A (hereafter Rongyong).
As per claim 1;
Cheng discloses;
A population output device comprising:
a storage unit that stores an estimation model that receives an
{[Page 2] To address the problem of fine-resolution population distribution based on multisource data, we designed a schema for population spatialization that includes multisource data processing, spatial data representation, a multimodel neural network for population estimation, and model verification methods. Specifically, the proposed approach uses a first-order space matrix of a geographic unit to represent local spatial information and to construct local spatial features of the units in question. The proposed multimodel neural network, which combines a CNN and a multilayer perceptron (MLP) model, takes account of both local and global spatial information by integrating multisource data. The CNN-based model extracts spatial dependence features from the first-order adjacency matrix, and the MLP-based model estimates a fine-resolution population mapping at a 100-m spatial resolution.}
Cheng discloses;
outputs population information related to a population estimated for each type of the map element of the area;
{[Page 3; Methodology] Then, the output is the corresponding population value of the grid. Finally, we validated our method by fitting the results to township-level census data. The model results were compared with the published gridded population dataset (WorldPop) for accuracy verification and analysis.}
Cheng discloses;
an acquisition unit that acquires the area information related to a target area that is an area to be targeted; and
{[Page 2; Related work] As geospatial data are gradually enriched, machine-learning-based methods determine weights based on specific rules related to spatial characteristics at different levels (e.g., nighttime light (NTL) intensity [26], and POIs [27], [28]). Social sensing data can be used to improve the accuracy of population spatialization [29]. Remote and social sensing data can complement each other by offering different urban surface information. These methods have produced many well-known high-resolution population datasets covering large geographic areas, including the Gridded Population of the World [30], the LandScan [31], [32], WorldPop [33], and the Global Human Settlement Population Gridded datasets [34].}
Cheng discloses
an output unit that outputs the population information related to the target area, the population information being output by inputting the
{[Page 2; Section 1] The CNN-based model extracts spatial dependence features from the first-order adjacency matrix, and the MLP-based model estimates a fine-resolution population mapping at a 100-m spatial resolution. After multisource data processing, the data of the two representations are inputted to the multimodel neural network, using the CNN-based and the MLP-based models to extract significant features. Those features are then fused, and the outputs are assigned to fully connected (FC) layers and the regression layer for predictive purposes.}
Cheng does not explicitly disclose inputting the area information; however, Wardrop discloses;
input of area information related to an area and including information related to a population of the area and information related to a summation value, for each type of a map element, related to one or more map elements constituting map data of the area, and
{[Page 4; Micro-census surveys]; Population data for a sample of areas across the area or country of interest are needed as a primary input to bottom-up population estimation. These data may come from a partial census, census-like population survey (i.e., where a survey is designed to provide population counts), or a specifically designed micro census survey.
[Page 5; Covariates]; The mapping of human settlements and even individual buildings from a new generation of satellite imagery and aerial photography is providing detailed geospatial data on human settlement patterns, a key input (as settlement areas or dwelling counts within specified areas) for bottom-up population estimation (12, 43).}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Cheng’s population output device and estimation model to include Wardrop’s input of area information related to an area and including information related to a population of the area and information related to a summation value, for each type of a map element, related to one or more map elements constituting map data of the area, since Cheng teaches a population output device (see Cheng, page 2). The combination would have been obvious to a person of ordinary skill in the art because the population output device is configured to output population information, and population data across the area of interest are acquired as a primary input to bottom-up population estimation. See Wardrop (pages 4-5).
Cheng does not explicitly disclose the actual area information; however, Rongyong discloses;
area information related to the target area acquired by the acquisition unit to the estimation model stored in the storage unit.
{[page 4] The crowd density graph output by the four-column convolutional neural network model can visually reflect the space distribution of the crowd, the number of the crowd in a given grid area can be calculated by an integral method, and the crowd density value is obtained by combining the size of an actual area.}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Cheng’s population output device and estimation model to include Rongyong’s area information related to the target area acquired by the acquisition unit to the estimation model stored in the storage unit, since Cheng teaches a population output device (see Cheng, page 2). The combination would have been obvious to a person of ordinary skill in the art because the population output device is configured to output population information, the population in a given grid area can be calculated by an integral method, and the crowd density value is obtained by combining the size of an actual area to enable a population estimate. See Rongyong (page 4).
As per claim 3;
Cheng discloses;
The population output device according to Claim 1, wherein the type of the map element includes at least one of a facility, a park, a station, a house, an office, a restaurant, an event venue, a lake, a river, a mountain, a road, or a railroad.
{[ Page 8; Infrastructure data] POIs capture geographic location attributes that indicate population concentration, including information such as the name, category, and location of geographic objects. POIs represent people's understanding of the functions and attributes of a specific place, and it is an important source of social sensing data [54]. Therefore, the social sensing data used in this article mainly refer to POI data.}
{[Page 8] The POI information here mainly includes education service (including school and educational institutions), health and social service (including hospital and clinics), commercial building (including companies and office buildings), commercial facilities and services (including shopping malls, shops, and banking institutions), parks, government agency, railway station (includes bus stations and subway stations), hotel service (including hotel buildings), restaurant service, and residential community.}
As per claim 4;
Cheng discloses;
The population output device according to Claim 1, wherein the area information further includes environmental data related to an environment.
{[page 10] This confirms that population distribution data with a spatial resolution of 100 m can facilitate comprehensive and effective management of population, resources, environment, and social economics, with important practical and theoretical implications for refined urban management.}
As per claim 5;
Cheng does not explicitly disclose the settlement mapping; however, Wardrop discloses;
The population output device according to Claim 4, wherein the environment includes at least one of a timing at which the population is measured or weather of the area at the timing.
{[ Page 2; The need for high spatial-resolution population data]; This highlights the important aspect of within-country heterogeneity, whereby aggregated data may hide significant subnational disparities. However, assessing progress against Sustainable Development Goal indicators relies on the availability of standardized and robust data, including a reliable baseline population estimate from which to measure change (16). Given the importance of regional heterogeneity in population characteristics, the United Nations has explicitly called for improved availability of high-quality, timely, and reliable data disaggregated by income, gender, age, race, ethnicity, migratory status, disability, geographic location, and other characteristics relevant to national contexts. This will be vital to ensure subnational variation in indicators is adequately captured (17).}
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Cheng’s population output device and estimation model to include Wardrop’s environment including at least one of a timing at which the population is measured or weather of the area at the timing, since Cheng teaches a population output device (see Cheng, page 2). The combination would have been obvious to a person of ordinary skill in the art because the population output device is configured to output population information, and the inclusion of a reliable, timely population measurement enables a population estimate. See Wardrop (page 2).
As per claim 6;
Cheng discloses;
The population output device according to Claim 1, wherein the population information is a population ratio estimated for each type of the map element, and the output unit computes a population estimated for each type of the map element of the target area based on the population information related to the target area and a population of the target area, and further outputs the computed population.
{[page 6] However, the total population of each county controls the actual population distribution. To reduce the error of the predicted population at the county level, we use the ratio of the county-level statistical population to the predicted population to correct the pixel-level predicted population.}
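For illustration only, the county-level ratio correction Cheng describes can be sketched as follows. This is a minimal sketch, not Cheng's implementation; the grid, county assignments, and census totals are hypothetical values:

```python
import numpy as np

def correct_pixel_population(pixel_pred, county_ids, county_census):
    """Scale pixel-level predictions so each county's pixels sum to its
    statistical (census) population, per the quoted ratio correction."""
    corrected = pixel_pred.astype(float).copy()
    for county, census_total in county_census.items():
        mask = county_ids == county
        predicted_total = pixel_pred[mask].sum()
        if predicted_total > 0:
            # ratio of county-level statistical population to predicted population
            corrected[mask] *= census_total / predicted_total
    return corrected

# Hypothetical 2x3 pixel grid spanning two counties
pixel_pred = np.array([[10., 20., 30.],
                       [40., 50., 60.]])
county_ids = np.array([[1, 1, 2],
                       [1, 2, 2]])
county_census = {1: 140., 2: 280.}
corrected = correct_pixel_population(pixel_pred, county_ids, county_census)
```

After correction, each county's pixel values sum exactly to its statistical total while preserving the predicted within-county distribution.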
As per claim 7;
Cheng discloses;
The population output device, wherein the output unit computes a population estimated for each map element based on the computed estimated population for each type of the map element of the target area and information related to the map element, and further outputs the computed population.
{[Page 3; Methodology]; The geographic attributes of each grid geographic unit comprise eight factors. Those factors were then inputted to the population spatialization models as an independent variable for training and the WorldPop dataset as the training label. The spatial representation of each geographic unit is used as input to the training model. Then, the output is the corresponding population value of the grid. Finally, we validated our method by fitting the results to township-level census data. The model results were compared with the published gridded population dataset (WorldPop) for accuracy verification and analysis.
[Page 8; Remote sensing data]; The area proportion of each type in each county level was calculated as the land cover factor. The extracted data were resampled to 100-m grids. The proportion of different land types in each grid was calculated as the land cover factor for population estimation, prepared for the following experimental section.
[Page 9]; Overall, the eight predictors for each grid cell were used to estimate the gridded population. The number of predictors was subsequently optimized by selecting the most important predictors from the list.}
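For illustration only, the quoted pipeline (eight geographic predictors per grid cell, fit against WorldPop-style labels, with the predictor list subsequently ranked by importance) can be sketched as below. The linear model, the synthetic data, and the importance ranking by coefficient magnitude are stand-ins; the quoted passage does not specify Cheng's model form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 grid cells, eight geographic predictors
# (e.g. land cover proportions), with WorldPop-style values as labels.
X = rng.random((200, 8))
true_w = np.array([5., 0., 3., 0., 0., 8., 0., 1.])
y = X @ true_w  # stand-in for the WorldPop training labels

# Fit a simple linear population-spatialization model (a stand-in for
# Cheng's model, whose exact form is not given in the quoted passage).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Optimize the predictor list by ranking predictors by importance,
# here approximated as absolute coefficient magnitude.
importance_order = np.argsort(-np.abs(w))
top_predictors = importance_order[:4]
```

On this noiseless synthetic data the fit recovers the generating weights, so the four most important predictors are the four with nonzero coefficients.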
As per claim 8;
Cheng does not explicitly disclose the population computation; however, Rongyong discloses;
The population output device according to Claim 1, wherein the output unit computes a population estimated for each map element based on the population information related to the target area and information related to the map element of the target area, and further outputs the computed population.
{[Page 3; Example 1] The embodiment adopts a four-column convolutional neural network model, the structure of which is shown in fig. 1, and includes four parallel convolutional neural networks with the same structure, the convolutional kernels of the convolutional neural networks are different in size and are respectively 7 × 7, 5 × 5, 3 × 3 and 1 × 1, the output of each convolutional neural network is mapped through a 1 × 1 filter to generate the two-dimensional density map matrix, wherein the position of each human head is represented by a small array of n × n (n can be defined as 15), and the value of the small array satisfies gaussian distribution. When the people in the crowd picture are calibrated, the position coordinates of the head of the people are obtained. To further represent a person with a bright spot in the crowd density map, the coordinate needs to be gaussian distributed, so that the coordinate becomes a small matrix in the two-dimensional density map matrix, which is a bright spot in the graphical display, and the bright spot is represented by a 15 × 15 pixel array in this embodiment.}
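For illustration only, the density-map construction Rongyong describes (each annotated head position replaced by a small Gaussian array, here 15 × 15, so that integrating the map yields the crowd count) can be sketched as follows. The image size, head coordinates, and sigma are hypothetical; this sketch covers only the label construction, not the four-column convolutional network itself:

```python
import numpy as np

def gaussian_kernel(n=15, sigma=4.0):
    """n x n Gaussian kernel normalized to sum to 1, so each head
    contributes exactly one person to the integrated density."""
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(shape, head_coords, n=15, sigma=4.0):
    """Place one normalized 15x15 Gaussian array at each head position."""
    dmap = np.zeros(shape)
    k = gaussian_kernel(n, sigma)
    r = n // 2
    for (row, col) in head_coords:
        # for simplicity this sketch assumes each head lies at least r
        # pixels from the border, so the kernel fits inside the image
        dmap[row - r:row + r + 1, col - r:col + r + 1] += k
    return dmap

# Hypothetical 64x64 crowd image with three annotated head positions
heads = [(20, 20), (20, 40), (40, 30)]
dmap = density_map((64, 64), heads)
# summing (integrating) the density map recovers the crowd count
count = dmap.sum()
```

Because each kernel integrates to one, the population in any grid area can then be obtained by summing the density values over that area, as the quoted integral method describes.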
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Cheng's population output device and estimation model to incorporate Rongyong's output unit, which computes a population estimated for each map element based on the population information related to the target area and information related to the map element of the target area (see Cheng, page 2). A person of ordinary skill would have been motivated to make the combination because, since the population output device is configured to output population information, the population in a given grid area can be calculated by an integral method, with the crowd density value obtained by combining it with the size of the actual area, enabling a population estimate using a convolutional neural network. See Rongyong (page 3).
As per claim 9;
Cheng discloses;
The population output device according to Claim 1, further comprising: a learning unit that trains the estimation model based on the area information related to the area and information related to a population for each type of the map element of the area, wherein the storage unit stores the estimation model trained by the learning unit.
{[page 3] The geographic attributes of each grid geographic unit comprise eight factors. Those factors were then inputted to the population spatialization models as an independent variable for training and the WorldPop dataset as the training label. The spatial representation of each geographic unit is used as input to the training model. Then, the output is the corresponding population value of the grid. Finally, we validated our method by fitting the results to township-level census data. The model results were compared with the published gridded population dataset (WorldPop) for accuracy verification and analysis.}
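For illustration only, the claimed learning unit (which trains the estimation model on area information and per-map-element population data) and storage unit (which stores the trained model) can be sketched as below. The class names, the least-squares estimation model, and the pickle-based storage are all hypothetical stand-ins, not structures from Cheng or the application:

```python
import pickle
import numpy as np

class LearningUnit:
    """Hypothetical learning unit: fits an estimation model from area
    information and per-map-element population labels."""

    def train(self, area_features, population_labels):
        # ordinary least squares as a stand-in estimation model
        w, *_ = np.linalg.lstsq(area_features, population_labels, rcond=None)
        return w

class StorageUnit:
    """Hypothetical storage unit: retains the trained estimation model."""

    def __init__(self):
        self._blob = None

    def store(self, model):
        self._blob = pickle.dumps(model)

    def load(self):
        return pickle.loads(self._blob)

rng = np.random.default_rng(1)
X = rng.random((50, 3))            # area information per map element
y = X @ np.array([2.0, 0.5, 1.0])  # population for each map-element type
model = LearningUnit().train(X, y)
storage = StorageUnit()
storage.store(model)
restored = storage.load()
```

The round trip through the storage unit returns the same trained model the learning unit produced.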
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. in view of Gervasoni et al. [Convolutional Neural Networks for Disaggregated Population Mapping Using Open Data], hereafter Gervasoni.
As per claim 2;
Gervasoni discloses;
The population output device, wherein a summation target of the summation value related to the map element includes at least one of the number of the map elements, an area of a polygon indicating the map element, or a length of a link indicating the map element.
{[Page 3; Urban features processing] The OSM database consists of geometrical elements (i.e., points, lines, polygons) with associated metadata - key-value tags - to indicate information about these elements.
[Page 3; Tags information] Buildings, building parts and POIs are classified according to their input tags. We follow in this process the information provided by the OSM wiki5. Afterwards, the land use is inferred for those buildings which do not contain this tag via the inclusion of other information, i.e. polygons with defined land use. This procedure is the same as done in [25].}
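For illustration only, the claimed summation targets (number of map elements, area of a polygon indicating a map element, length of a link indicating a map element) can be sketched over hypothetical OSM-style geometries. The building footprints and road coordinates below are invented values:

```python
import math

def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

def link_length(points):
    """Total length of a polyline (e.g. a road link)."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Hypothetical map elements: two building footprints and one road link
buildings = [
    [(0, 0), (4, 0), (4, 3), (0, 3)],   # 4 x 3 rectangle, area 12
    [(0, 0), (2, 0), (2, 2), (0, 2)],   # 2 x 2 square, area 4
]
road = [(0, 0), (3, 4), (3, 10)]        # segments of length 5 and 6

element_count = len(buildings)                        # number of map elements
total_area = sum(polygon_area(b) for b in buildings)  # summed polygon area
total_length = link_length(road)                      # summed link length
```

Each of the three quantities is one of the claimed summation targets; any of them could feed a population estimate as a per-grid feature.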
Motivation: It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Cheng's population output device and estimation model to incorporate Gervasoni's teaching wherein a summation target of the summation value related to the map element includes at least one of the number of the map elements, an area of a polygon indicating the map element, or a length of a link indicating the map element, since Cheng teaches a population output device (see Cheng, page 2). A person of ordinary skill would have been motivated to make the combination because, since the population output device is configured to output population information, the inclusion of an area of a polygon indicating the map element serves to indicate the map elements. See Gervasoni (page 3).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Mikhno et al (US20220039755A1); An optimized population model that estimates blood glucose values for a population of users is generated by mapping received data for the population of users over a time period to a sequence of estimated blood glucose values for the population of users over the time period. Discrete blood glucose measurement data for each user, user activity data for each user, and other contextual data for each user can be processed via a supervised machine learning model to learn a transfer function for a population model that estimates blood glucose values for the population of users. One or more parameters of the learning model can be adjusted to generate the optimized population model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR CHIGOZIRIM ESONU whose telephone number is (571) 272-4883. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SARAH MONFELDT can be reached on (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VICTOR CHIGOZIRIM ESONU
Examiner, Art Unit 3629
/SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629