DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 03/20/2024 and 04/15/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Office Action Summary
Claim(s) 1-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nishimura et al. (Method for detecting degree of submergence in flood disaster using drone) in view of Chaudhary et al. (Flood-Water Level Estimation From Social Media Images).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nishimura et al. (Method for detecting degree of submergence in flood disaster using drone) in view of Chaudhary et al. (Flood-Water Level Estimation From Social Media Images).
Regarding claim(s) 1, 6, and 7, Nishimura teaches a water inundation depth determination device comprising:
a memory (Page 52, Left Col., 1st Paragraph: “For model learning we used Intel (R) Xeon (R) @2.20 GHz CPU and Google Collaboratory P100 PCIE-16GB GPU with 16GB memory”); and
a processor coupled to the memory (Page 52, Left Col., 1st Paragraph: “For model learning we used Intel (R) Xeon (R) @2.20 GHz CPU and Google Collaboratory P100 PCIE-16GB GPU with 16GB memory”), the processor being configured to perform processing comprising:
detecting a first type of a target and a first submersion position of the target included in a first captured image (Figure 3; Figure 4; Page 54, Right Col., 1st Paragraph: “we defined a method to detect submerged houses and cars from aerial images of flood disaster sites available from drones, and to determine their degree of submergence”; and Page 49, Right Col., 1st Paragraph – 2nd Paragraph: “As shown in Fig. 3, we define the criteria for determining the degree of submergence using the standard sizes of houses and vehicles as indicators […] estimate the degree of submergence of a shielded object on the assumption that (1) houses and cars with the same roof level are at the same elevation, and (2) houses and cars with the same elevation are at the same submergence”); and
outputting a first submersion depth corresponding to the first type and the first submersion position (Figure 3; Figure 10; Table 5; Page 49, Right Col., 1st Paragraph – 2nd Paragraph; and Page 53, Right Col., 2nd Paragraph: “In the submergence estimation […] Table 5 shows the number of objects in each class and water level. In the submergence estimation, 80% of the 627 object images are used for learning and the remaining 20% are used for testing”).
Nishimura fails to teach outputting a first submersion depth corresponding to the first type and the first submersion position by referring to depth information in which a depth is associated with a pair of a type and a submersion position. However, Chaudhary teaches this limitation (Table 1; Figure 2; Figure 4; Figure 5; Page 8, Left Col., 2nd Paragraph: “To map level classes to actual flood height, we consider an average height human body and derive the water height in cm, see Table 1 […] we compare them with the average human height, on which the 11 flood levels are defined, and extend the flood level definition to these other objects”).
Nishimura discloses determining a degree of submergence based on captured images by detecting a type of a target, such as a house or a vehicle, included in an aerial image captured by a drone, and determining the submergence condition of the target based on reference positions of the target (e.g., windows, floors, or tires), thereby outputting a submergence degree corresponding to the detected target. However, Nishimura does not explicitly describe referring to depth information in which a depth is associated with a pair of a target type and a submersion position. Chaudhary teaches estimating flood water level by assessing how much detected objects appearing in an image are submerged in water and explicitly mapping object-dependent flood level classes to actual water depth values using predefined depth information, such as a level-to-centimeter relation table that associates object type and submersion level with a corresponding depth.
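For context only, the mapping Chaudhary describes can be pictured as a simple lookup keyed by a (type, submersion position) pair. The Python sketch below is purely illustrative; the class names, position labels, and centimeter values are hypothetical placeholders and are not taken from Chaudhary's Table 1 or from Nishimura.

```python
# Illustrative sketch of "depth information": a table in which a depth is
# associated with a pair of a target type and a submersion position.
# All labels and centimeter values below are hypothetical placeholders.
from typing import Optional

DEPTH_TABLE_CM = {
    ("upright_person", "below_knee"): 30,
    ("upright_person", "waist"): 90,
    ("vehicle", "tire"): 50,
    ("vehicle", "window"): 120,
    ("building", "first_floor"): 100,
}

def lookup_depth(target_type: str, submersion_position: str) -> Optional[int]:
    """Return the depth (cm) associated with the (type, position) pair."""
    return DEPTH_TABLE_CM.get((target_type, submersion_position))

print(lookup_depth("vehicle", "tire"))  # -> 50
```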
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the submergence determination of Nishimura by implementing the object-based flood level estimation and depth association taught by Chaudhary in order to quantitatively output a submersion depth corresponding to the detected target type and submersion position, as such a modification merely applies a known technique for associating object-dependent submersion levels with actual depth values (Table 1) to improve the accuracy and usefulness of the submergence determination, yielding predictable results. This motivation for the combination of Nishimura and Chaudhary is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
Regarding claim(s) 2, Nishimura as modified by Chaudhary teaches the non-transitory computer-readable recording medium according to claim 1, wherein Chaudhary teaches that the detecting of the first type includes detecting the first type by inputting the first captured image to a first machine learning model generated by machine learning using the captured image as a feature amount and a type as a correct label (Figure 2; Table 1; Page 6, Right Col., 3rd Paragraph: “The Region proposal network (RPN) is a neural network which scans over the image and gives scores based on whether there is an object or not in the scanned regions”; and Chapter 4.1. Annotation strategy: “the goal of this study is to quantify flood-water level based on objects partially submerged in water, the first step for defining the annotation strategy is to decide which objects we should consider for the classification task […] Based on the criteria we decided to consider these five classes of objects: Person, Car, Bus, Bicycle, and House […] we also consider the flood class, which represents flood-water present in the image […] For each of these objects appearing in the dataset images we further define a bounding box containing the object and a segmentation mask which highlights the object […]”).
Regarding claim(s) 3, Nishimura as modified by Chaudhary teaches the non-transitory computer-readable recording medium according to claim 1, wherein Chaudhary teaches that the detecting of the first water inundation position includes detecting the first water inundation position by inputting the target included in the first captured image to a second machine learning model generated by machine learning using the target included in the captured image as a feature amount and the water inundation position as a correct label (Figure 4; Figure 5; Abstract: “one possible way to estimate the flood level consists of assessing how much the objects appearing in the image are submerged in water”; Page 7, Left Col., 3rd Paragraph: “Flood Level: We predict the flood level class of the proposal”; and Page 8, Left Col., 2nd Paragraph: “We consider 11 flood levels, levels go from 0, which means no water, to 10, which represents a human body of average height completely submerged in water. Moreover, since in order to create the training dataset, we need to annotate manually the images with water level information”).
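For illustration of the two-model arrangement recited in claims 2 and 3, the sketch below wires a first classifier (captured image → type) and a second classifier (target region → water inundation position) into one detection step. The model functions are stand-in stubs, not the classifiers actually trained in Chaudhary; every name here is hypothetical.

```python
# Hypothetical two-stage pipeline: a first machine learning model predicts
# the target type from the captured image, and a second model predicts the
# water inundation (submersion) position from the detected target region.
# Both models are stubbed with fixed outputs for demonstration.
from dataclasses import dataclass

@dataclass
class Detection:
    target_type: str          # output of the first model
    submersion_position: str  # output of the second model

def first_model(captured_image: bytes) -> str:
    # Stand-in for a classifier trained with captured images as feature
    # amounts and types as correct labels.
    return "vehicle"

def second_model(target_region: bytes) -> str:
    # Stand-in for a classifier trained with target regions as feature
    # amounts and inundation positions as correct labels.
    return "tire"

def detect(captured_image: bytes) -> Detection:
    target_type = first_model(captured_image)
    # In practice the target region would first be cropped from the image.
    position = second_model(captured_image)
    return Detection(target_type, position)

print(detect(b""))  # -> Detection(target_type='vehicle', submersion_position='tire')
```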
Regarding claim(s) 4, Nishimura as modified by Chaudhary teaches the non-transitory computer-readable recording medium according to claim 1, wherein Chaudhary teaches that the detecting of the first type includes a process of detecting at least one of an upright person, a squatting person, a sitting person, a vehicle, a building, and a utility pole as the first type (Chapter 4.1. Annotation strategy: “the goal of this study is to quantify flood-water level based on objects partially submerged in water, the first step for defining the annotation strategy is to decide which objects we should consider for the classification task […] Based on the criteria we decided to consider these five classes of objects: Person, Car, Bus, Bicycle, and House”).
Regarding claim(s) 5, Nishimura as modified by Chaudhary teaches the non-transitory computer-readable recording medium according to claim 4, wherein the detecting of the first submersion position includes detecting at least one of a below-knee position, an above-knee position, a waist position, and a shoulder position of the upright person, a waist position or a shoulder position of the squatting person or the sitting person, a tire position, a window position, and submersion of the vehicle, an underfloor position, a floor position, a first floor position, and a second floor position of the building, and a place name and land number display, a pillar advertisement, and a hanging advertisement of the utility pole as the first submersion position. Nishimura teaches this limitation (Figure 3; Figure 4; Page 54, Right Col., 1st Paragraph: “we defined a method to detect submerged houses and cars from aerial images of flood disaster sites available from drones, and to determine their degree of submergence”; and Page 49, Right Col., 1st Paragraph – 2nd Paragraph: “As shown in Fig. 3, we define the criteria for determining the degree of submergence using the standard sizes of houses and vehicles as indicators […] estimate the degree of submergence of a shielded object on the assumption that (1) houses and cars with the same roof level are at the same elevation, and (2) houses and cars with the same elevation are at the same submergence”). Additionally, Chaudhary teaches this limitation (Table 1; Figure 2; Figure 4; Figure 5; Page 8, 2nd Paragraph: “We consider 11 flood levels, levels go from 0, which means no water, to 10, which represents a human body of average height completely submerged in water […] The height of the different levels is then inspired by drawing artists who use head height as the building block for the human figure […] We can now extend the annotation strategy to the other four different classes of objects by considering their average height”).
Relevant Prior Art Directed to State of Art
Watanabe (US 2023/0222642 A1) is relevant prior art not applied in the rejection(s) above. Watanabe discloses an inundation damage determination device comprising: a memory that stores a command to be executed by a processor; and the processor that executes the command stored in the memory, wherein the processor acquires an image including a water surface, detects a reference object, which is a reference for a height, from the image, acquires a total length and a position of the reference object, measures a length above the water surface of the reference object in the image and measures a water level of the water surface from a difference between the total length of the reference object and the length above the water surface of the reference object, stores the water level of the water surface and the position of the reference object in association with each other, acquires a position of a house which is a target of an inundation damage determination, decides an inundation water level of the house from the position associated with the water level of the water surface and the position of the house, and determines a degree of damage of the house from the inundation water level.
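The core measurement Watanabe describes reduces to a subtraction: the water level equals the reference object's known total length minus the length remaining visible above the water surface. A minimal sketch follows, with hypothetical values not drawn from the reference.

```python
# Sketch of the subtraction Watanabe describes: water level = total length
# of the reference object minus its length above the water surface.
# The numbers are hypothetical.
def water_level_cm(total_length_cm: float, length_above_water_cm: float) -> float:
    return total_length_cm - length_above_water_cm

# e.g., a 180 cm reference object with 120 cm visible above the surface
print(water_level_cm(180.0, 120.0))  # -> 60.0 cm of water
```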
Du et al. (US 11,555,701 B2) is relevant prior art not applied in the rejection(s) above. Du discloses a method of detecting a first floor height (FFH) of a first floor of a subject building relative to a terrain or surface of a parcel of land on which the subject building is located, the method comprising: obtaining information on a building footprint of the subject building on the parcel of land; applying an image of the subject building to a CNN-based AI engine, which has previously been trained, so as to identify a first floor of the subject building from the image, the CNN-based AI engine having previously been trained with other images of a plurality of other buildings, the other images including at least one of a front, a side, or a back-side view of individual buildings of the plurality of other buildings; analyzing the image with the CNN-based AI engine and determining the FFH of the subject building; extracting digital elevation map information of the terrain and/or surface from a dataset for the parcel of land; converting the FFH of the subject building to a first floor elevation (FFE) from the FFH and the digital elevation map information; and identifying a location at an adjacent grade point along the building footprint so as to determine an elevation of the location and the FFE of the subject building at the location.
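Du's conversion step likewise reduces to adding the detected first floor height (FFH) to the terrain elevation taken from the digital elevation map. A minimal sketch, assuming both quantities share the same vertical datum and units; the values are hypothetical.

```python
# Sketch of the FFH-to-FFE conversion Du describes: the first floor
# elevation is the terrain elevation from the digital elevation map plus
# the first floor height detected relative to that terrain.
def first_floor_elevation_m(ffh_m: float, dem_elevation_m: float) -> float:
    return dem_elevation_m + ffh_m

print(first_floor_elevation_m(1.2, 35.0))  # -> 36.2 m
```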
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONGBONG NAH whose telephone number is (571) 272-1361. The examiner can normally be reached M - F: 9:00 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL MISTRY can be reached on 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONGBONG NAH/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674