DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
1 This action is in response to the amendment filed on 1/15/2026. Claims 1-2, 4, 6, and 13-14 have been amended, claims 15-17 have been cancelled, and claims 18-22 have been added. Claims 1-14 remain rejected, and newly added claims 18-22 are also rejected.
Response to Arguments
2 Applicant’s arguments filed on 1/15/2026 with respect to independent claims 1, 13, and 14, regarding the rejection under 35 U.S.C. § 102 and asserting that the prior art does not teach, among other limitations, “while not assigning the same identification information to any image having not been used for the composition processing,” have been considered but are moot in view of the new grounds of rejection under 35 U.S.C. § 103.
3 Regarding the arguments for claims 2-12, these claims depend directly or indirectly from independent claim 1. Applicant does not present arguments separate from those directed to independent claims 1, 13, and 14. The limitations of these dependent claims, in combination, were largely addressed as previously explained, with a few adjustments made to correspond to the amendments of the independent claims.
4 Claims 15-17 have been cancelled by the applicant as noted above; therefore, these claims will not be addressed further.
5 Claims 18-22 are newly added claims that depend from independent claims 1 and 13. They have been considered and are rejected under the new grounds of rejection under 35 U.S.C. § 103 set forth below.
Claim Rejections - 35 USC § 103
6 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8 Claims 1-3, 7-8, 12-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yumibe et al. (JP 2015001756 A) in view of Higashiyama et al. (US 20150358546 A1).
9 Regarding claim 1, Yumibe teaches an information processing apparatus comprising: a processor ([Claim 1] reciting “A state change management system comprising an imaging unit that captures an image of a target object, a position information acquisition unit that measures a position of the imaging unit, a posture information acquisition unit that measures a posture of the imaging unit, one or more processors, and one or more memories connected to the one or more processors…”), wherein the processor
acquires an image group captured with overlapping imaging ranges ([0110] reciting “First, the facility image model construction program 310 detects an overlapping region of two images (for example, an overlapping region 2410 of a portion 2408 and a portion 2409 in FIG. 24) from a plurality of images pasted on the omnidirectional image model to be processed…”),
performs composition processing on the acquired image group ([0176] reciting “The photographing condition addition program 309 calculates photographing conditions (facility ID, photographing distance, photographing direction, attitude angles (A, B, C) of the portable information terminal at the time of photographing, photographing date and time, photographing position information, and the like) from the photographed image and the acquired information, adds the photographing conditions to the photographed image, and stores the photographing conditions in the memory 206 in the terminal (step 3505). The processing contents of the photographing condition addition program 309 are the same as those in the first embodiment”),
according to a result of the composition processing, assigns same identification information to images having been used for the composition processing of a same imaging target ([0099] reciting “First, the facility image model construction program 310 refers to the photographing conditions added to the images from the facility image data for all the facilities uploaded to the server system 100, and searches for image data having the same facility ID photographed in the same time zone (step 2200). Here, the "image data having the same facility ID photographed in the same time zone" is a plurality of image data of one target facility 211 photographed in one patrol inspection.”; [0204] reciting “The portable information terminal 101 specifies the coordinate value on the map of the position designated by the patroller to input the target facility 211, specifies the facility ID of the input target facility 211 by collating the specified coordinate value with the facility information data 400 (step 3604), and transmits the result to the server system 100 (step 3605).”), and
attaches the assigned identification information as metadata to each of the images having been used for the composition processing, by adding the assigned identification information to image data of each of the images as accessory information ([0030] reciting “In the present embodiment, Exif ( Exchangeable image file format ) is adopted as the format of the image 704. The Exif format is an image format including metadata for photographs, and various types of metadata can be added to photographs. A specific example will be described with reference to FIG. 8.”; [0040] reciting “In the facility image model 402, an omnidirectional image model of a target facility generated from a plurality of images obtained by photographing the target facility from arbitrary positions and directions at the time of patrol and inspection in the power distribution facility is held in time series, and the facility ID1100 and the omnidirectional image models 1101, 1102, and 1104 in time series are stored in association with each other. In the example of FIG. 11, an omnidirectional image model for each year (that is, an omnidirectional image model generated from image data captured in each year) is stored…”; [0050] reciting “The photographing condition addition program 309 calculates photographing conditions (equipment ID, photographing distance, photographing direction, attitude angles (A, B, C) of the portable information terminal 101 at the time of photographing, photographing date and time, photographing position information, and the like) from the acquired information transmitted from the portable information terminal 101, and adds the photographing conditions to the Exif information of the photographed image as metadata (step 1302).”).
10 Yumibe does not explicitly teach while not assigning the same identification information to any image having not been used for the composition processing…
11 Higashiyama teaches while not assigning the same identification information to any image having not been used for the composition processing ([0145] reciting “This Group ID is a unique ID that is set in each of the series of image-capturing in accordance with a single image-capturing command (SW2), and is generated from a random number, a current time, and the like.”; [0157] reciting “Therefore, the user can determine whether to obtain each of the composition images before and after the blur or to obtain a composition image obtained by compositing the composition images before and after the blur into the single composition image after the user sees the composition image. Further, by giving the same Group ID to multiple images captured in the series of starlit sky track, only the composition images in the same Group captured can be played back successively or as a list in response to the same image-capturing command during play back.”)…
12 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention to have modified the method (taught by Yumibe) to incorporate the teachings of Higashiyama, providing a method that can determine when to assign specific IDs only to established compositions, while utilizing the IDs and the composition groups provided by the teachings of Yumibe. Doing so would allow multiple composition images generated in the series of image-capturing to be composited easily, as stated by Higashiyama ([0157] recited).
13 Regarding claim 2, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor
performs panorama composition processing on the acquired image group (Yumibe; [0006] reciting “The present invention has been made in view of the above problems, and an object of the present invention is to provide a state change management system and a state change management method that automatically generate an omnidirectional image model of a target object on the basis of positioning information acquired by a mobile terminal having an imaging function and a positioning function and a plurality of captured images obtained by imaging the target object from arbitrary positions and directions”),
assigns the same identification information to images constituting a composite region (Yumibe; [0099] reciting “First, the facility image model construction program 310 refers to the photographing conditions added to the images from the facility image data for all the facilities uploaded to the server system 100, and searches for image data having the same facility ID photographed in the same time zone (step 220 0). Here, the "image data having the same facility ID photographed in the same time zone" is a plurality of image data of one target facility 211 photographed in one patrol inspection.”; [0204] reciting “The portable information terminal 101 specifies the coordinate value on the map of the position designated by the patroller to input the target facility 211, specifies the facility ID of the input target facility 211 by collating the specified coordinate value with the facility information data 400 (step 3604), and transmits the result to the server system 100 (step 3605).”), while not assigning the same identification information to any image not constituting the composite region (Higashiyama; ([0145] reciting “This Group ID is a unique ID that is set in each of the series of image-capturing in accordance with a single image-capturing command (SW2), and is generated from a random number, a current time, and the like.”; [0157] reciting “Therefore, the user can determine whether to obtain each of the composition images before and after the blur or to obtain a composition image obtained by compositing the composition images before and after the blur into the single composition image after the user sees the composition image. Further, by giving the same Group ID to multiple images captured in the series of starlit sky track, only the composition images in the same Group captured can be played back successively or as a list in response to the same image-capturing command during play back.”), and
attaches the assigned identification information to the images as the accessory information (Yumibe; [0030] reciting “In the present embodiment, Exif ( Exchangeable image file format ) is adopted as the format of the image 704. The Exif format is an image format including metadata for photographs, and various types of metadata can be added to photographs. A specific example will be described with reference to FIG. 8.”; [0040] reciting “In the facility image model 402, an omnidirectional image model of a target facility generated from a plurality of images obtained by photographing the target facility from arbitrary positions and directions at the time of patrol and inspection in the power distribution facility is held in time series, and the facility ID1100 and the omnidirectional image models 1101, 1102, and 1104 in time series are stored in association with each other. In the example of FIG. 11, an omnidirectional image model for each year (that is, an omnidirectional image model generated from image data captured in each year) is stored…”; [0050] reciting “The photographing condition addition program 309 calculates photographing conditions (equipment ID, photographing distance, photographing direction, attitude angles (A, B, C) of the portable information terminal 101 at the time of photographing, photographing date and time, photographing position information, and the like) from the acquired information transmitted from the portable information terminal 101, and adds the photographing conditions to the Exif information of the photographed image as metadata (step 1302).”).
14 Regarding claim 3, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 2 (see claims 1-2 rejections above), wherein the processor further
acquires information on the region (Yumibe; [Claim 8] reciting “The second processor executes the first procedure, the second procedure, and the third procedure on the basis of the information acquired from the imaging unit, the position information acquisition unit…”), and
assigns information for specifying the region as the identification information ([0015] reciting “The portable information terminal 101 includes a wireless communication unit 200, a display unit 201 such as a display that presents a captured image, facility information, and the like to a user, a date and time obtaining unit 202 such as a watch that obtains date and time, a position information obtaining unit 203 that obtains position information from GPS satellites 210, an imaging unit 209 such as a digital still camera that images a target facility 211, a CPU ( Central Processing Unit ) 204 that controls the entire process”; [0016] reciting “In addition, as will be described later, posture information of the portable information terminal 101 (that is, the photographing unit 209) is acquired on the basis of the acceleration measured by the three axis acceleration sensor 207 and the geomagnetism measured by the three axis geomagnetic sensor 208.”; [0069] reciting “the patroller may visually read the information and input the information to the portable information terminal 101, or the portable information terminal 101 may read the information from the captured image. Alternatively, as will be described later as a fifth embodiment, the patroller may input information indicating the position of the target facility 211, and the portable information terminal 101 or the server system 100 may specify the facility ID of the target facility 211 according to the information.”).
15 Regarding claim 7, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor further
acquires information on a result of image analysis for the image (Yumibe; [0021] reciting “At the same time, the change detection program 311 refers to the imaging conditions of the captured image transmitted from the portable information terminal 101, searches the database for a past omnidirectional image model of the corresponding facility, cuts out a portion corresponding to the facility portion captured in the captured image from the omnidirectional image model, matches the cut-out portion with the captured image to detect a secular change, and transmits the detection result to the portable information terminal 101. The portable information terminal 101 presents the detection result received from the server system 100 to the patroller by displaying the detection result on the display unit 201.”), and
adds the acquired information on the result of the image analysis to the accessory information to be attached to the image ([0021] reciting “In addition, the facility image model construction program 310 generates an omnidirectional image model of the facility from a plurality of captured images to which the imaging conditions are added and which are accumulated in the database 303, and manages the omnidirectional image model in time series (that is, in association with the imaging time of the captured image).”).
16 Regarding claim 8, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 7 (see claims 1 and 7 rejections above),
wherein the information on the result of the image analysis includes at least one of information on a detection result by the image analysis (Yumibe; [0051] reciting “The change detection program 311 refers to the imaging condition added to the captured image in step 1302, searches the past omnidirectional image model of the corresponding facility from the facility image model data 402 of the database 303, cuts out a portion corresponding to the facility portion captured in the captured image from the omnidirectional image model, matches the cut-out portion with the captured image to detect a secular change of the target facility 211, and transmits a detection result to the portable information terminal (step 1303).”), information on a type determination result by the image analysis, or information on a measurement result by the image analysis.
17 Regarding claim 12, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 1 (see claim 1 rejections above) wherein the accessory information is used for searching for the image (Yumibe; [0075] reciting “The change detection program 311 reads the facility ID from the photographed image to which the photographing condition is added by the photographing condition addition program 309, and searches the facility image model data 402 of the database 303 for a time-series omnidirectional image model corresponding to the facility ID (step 1700).”).
18 Claims 13 and 14 have similar limitations as claim 1; therefore, they are rejected under the same rationale as claim 1.
19 Regarding claim 18, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 1, wherein the processor further assigns predetermined information to the image having not been used for the composition processing (Higashiyama; [Abstract] reciting “a control unit configured such that when the control unit obtains information indicating that a predetermined blur, a change in a predetermined subject brightness, or a change in a color of a predetermined subject is detected based on a detection result, the control unit controls the composite unit so as to generate a composition image made by compositing a plurality of images captured before the detection.”; [0145] reciting “This Group ID is a unique ID that is set in each of the series of image-capturing in accordance with a single image-capturing command (SW2), and is generated from a random number, a current time, and the like.”; [0157] reciting “Therefore, the user can determine whether to obtain each of the composition images before and after the blur or to obtain a composition image obtained by compositing the composition images before and after the blur into the single composition image after the user sees the composition image. Further, by giving the same Group ID to multiple images captured in the series of starlit sky track, only the composition images in the same Group captured can be played back successively or as a list in response to the same image-capturing command during play back.”), and
attaches the assigned predetermined information as metadata to the image having not been used for the composition processing (Higashiyama; [0145] reciting “This Group ID is a unique ID that is set in each of the series of image-capturing in accordance with a single image-capturing command (SW2), and is generated from a random number, a current time, and the like.”; [0157] reciting “Therefore, the user can determine whether to obtain each of the composition images before and after the blur or to obtain a composition image obtained by compositing the composition images before and after the blur into the single composition image after the user sees the composition image. Further, by giving the same Group ID to multiple images captured in the series of starlit sky track, only the composition images in the same Group captured can be played back successively or as a list in response to the same image-capturing command during play back.”), by adding the assigned predetermined information to image data of the image as accessory information ([0030] reciting “In the present embodiment, Exif ( Exchangeable image file format ) is adopted as the format of the image 704. The Exif format is an image format including metadata for photographs, and various types of metadata can be added to photographs. A specific example will be described with reference to FIG. 8.”; [0040] reciting “In the facility image model 402, an omnidirectional image model of a target facility generated from a plurality of images obtained by photographing the target facility from arbitrary positions and directions at the time of patrol and inspection in the power distribution facility is held in time series, and the facility ID1100 and the omnidirectional image models 1101, 1102, and 1104 in time series are stored in association with each other. In the example of FIG. 11, an omnidirectional image model for each year (that is, an omnidirectional image model generated from image data captured in each year) is stored…”; [0050] reciting “The photographing condition addition program 309 calculates photographing conditions (equipment ID, photographing distance, photographing direction, attitude angles (A, B, C) of the portable information terminal 101 at the time of photographing, photographing date and time, photographing position information, and the like) from the acquired information transmitted from the portable information terminal 101, and adds the photographing conditions to the Exif information of the photographed image as metadata (step 1302).”).
20 Claim 19 has similar limitations as claim 18; therefore, it is rejected under the same rationale as claim 18.
21 Claim 20 has similar limitations as claim 2; therefore, it is rejected under the same rationale as claim 2.
22 Claims 4-5 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yumibe et al. (JP 2015001756 A) in view of Higashiyama et al. (US 20150358546 A1) as applied to claims 1 and 13 above, and further in view of Kikuchi et al. (US 20180300868 A1) and Khan et al. (US 20170294021 A1).
23 Regarding claim 4, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor
performs three dimensional composition processing on the acquired image group (addressed by Kikuchi below),
extracts regions constituting a same surface of an object from a result of the three dimensional composition processing (addressed by Khan below),
assigns the same identification information to images constituting the extracted regions (Yumibe; [0099] reciting “First, the facility image model construction program 310 refers to the photographing conditions added to the images from the facility image data for all the facilities uploaded to the server system 100, and searches for image data having the same facility ID photographed in the same time zone (step 220 0). Here, the "image data having the same facility ID photographed in the same time zone" is a plurality of image data of one target facility 211 photographed in one patrol inspection.”; [0177] reciting “The change detection program 311 refers to the imaging conditions added to the captured image in step 3505, searches for the past omnidirectional image model of the corresponding facility from the facility image model data downloaded to the memory 206, cuts out a portion corresponding to the facility portion captured in the captured image from the searched omnidirectional image model, and matches the cut-out portion with the captured image to detect the secular change of the target facility 211 (step 3506).”; [0204] reciting “The portable information terminal 101 specifies the coordinate value on the map of the position designated by the patroller to input the target facility 211, specifies the facility ID of the input target facility 211 by collating the specified coordinate value with the facility information data 400 (step 3604), and transmits the result to the server system 100 (step 3605).”), while not assigning the same identification information to any image not constituting the extracted regions (Higashiyama; ([0145] reciting “This Group ID is a unique ID that is set in each of the series of image-capturing in accordance with a single image-capturing command (SW2), and is generated from a random number, a current time, and the like.”; [0157] reciting “Therefore, the user can determine whether to obtain each of the composition images before and after the blur or to obtain a composition image obtained by compositing the composition images before and after the blur into the single composition image after the user sees the composition image. Further, by giving the same Group ID to multiple images captured in the series of starlit sky track, only the composition images in the same Group captured can be played back successively or as a list in response to the same image-capturing command during play back.”), and
attaches the assigned identification information to the images as the accessory information (Yumibe; [0030] reciting “In the present embodiment, Exif ( Exchangeable image file format ) is adopted as the format of the image 704. The Exif format is an image format including metadata for photographs, and various types of metadata can be added to photographs. A specific example will be described with reference to FIG. 8.”; [0040] reciting “In the facility image model 402, an omnidirectional image model of a target facility generated from a plurality of images obtained by photographing the target facility from arbitrary positions and directions at the time of patrol and inspection in the power distribution facility is held in time series, and the facility ID1100 and the omnidirectional image models 1101, 1102, and 1104 in time series are stored in association with each other. In the example of FIG. 11, an omnidirectional image model for each year (that is, an omnidirectional image model generated from image data captured in each year) is stored…”; [0050] reciting “The photographing condition addition program 309 calculates photographing conditions (equipment ID, photographing distance, photographing direction, attitude angles (A, B, C) of the portable information terminal 101 at the time of photographing, photographing date and time, photographing position information, and the like) from the acquired information transmitted from the portable information terminal 101, and adds the photographing conditions to the Exif information of the photographed image as metadata (step 1302).”).
24 Yumibe in view of Higashiyama does not explicitly teach performs three dimensional composition processing on the acquired image group, extracts regions constituting a same surface of an object from a result of the three dimensional composition processing…
25 Kikuchi teaches performs three dimensional composition processing on the acquired image group ([0083] reciting “In this embodiment, the first space information acquisition unit 322 calculates three-dimensional coordinates of the structure on the basis of the image data indicating the first image I.sub.L and the second image I.sub.R with parallax captured by the twin-lens camera 202, and acquires the calculated three-dimensional coordinates as the first space information on the structure.”).
26 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Yumibe in view of Higashiyama) to incorporate the teachings of Kikuchi to provide a method to have a processor perform three-dimensional composition for the image groups taught by Yumibe in view of Higashiyama. Doing so would allow the method(s) to be capable of easily and accurately acquiring member identification indicating a member included in an image of a structure as stated by Kikuchi ([Abstract] recited).
27 Yumibe in view of Higashiyama and Kikuchi does not explicitly teach extracts regions constituting a same surface of an object from a result of the three dimensional composition processing…
28 Khan teaches extracts regions constituting a same surface of an object from a result of the three dimensional composition processing… ([0012] reciting “…refining depths of pixels included in an edge segment based on a surface shape of an object at which the edge segment included in the image is located, refining depths of pixels included in a same surface based on depth ensembles in the pixels included in the same surface of the object, and refining the depths of pixels included in the edge segment which is located at the object based on three-dimensional characteristics of the object.”).
29 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Yumibe in view of Higashiyama and Kikuchi) to incorporate the teachings of Khan to provide a method that can obtain or extract certain regions or segments with the same surface of an object from the 3D processing taught by Yumibe in view of Higashiyama and Kikuchi. Doing so would refine depths of sparse depth images in a multi-aperture camera as stated by Khan ([0012] recited).
30 Regarding claim 5, Yumibe in view of Higashiyama, Kikuchi, and Khan teaches the information processing apparatus according to claim 4 (see claims 1 and 4 rejections above), wherein the processor further
acquires information on the region (Yumibe; [Claim 8] reciting “The second processor executes the first procedure, the second procedure, and the third procedure on the basis of the information acquired from the imaging unit, the position information acquisition unit…”), and
assigns information for specifying the region as the identification information (Yumibe; ([0015] reciting “The portable information terminal 101 includes a wireless communication unit 200, a display unit 201 such as a display that presents a captured image, facility information, and the like to a user, a date and time obtaining unit 202 such as a watch that obtains date and time, a position information obtaining unit 203 that obtains position information from GPS satellites 210, an imaging unit 209 such as a digital still camera that images a target facility 211, a CPU ( Central Processing Unit ) 204 that controls the entire process”; [0016] reciting “In addition, as will be described later, posture information of the portable information terminal 101 (that is, the photographing unit 209) is acquired on the basis of the acceleration measured by the three axis acceleration sensor 207 and the geomagnetism measured by the three axis geomagnetic sensor 208.”; [0069] reciting “the patroller may visually read the information and input the information to the portable information terminal 101, or the portable information terminal 101 may read the information from the captured image. Alternatively, as will be described later as a fifth embodiment, the patroller may input information indicating the position of the target facility 211, and the portable information terminal 101 or the server system 100 may specify the facility ID of the target facility 211 according to the information.”).
31 Claim 21 has similar limitations as claim 4; therefore, it is rejected under the same rationale as claim 4.
32 Claims 6 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Yumibe et al. (JP 2015001756 A) in view of Higashiyama et al. (US 20150358546 A1) as applied to claims 1 and 13 above, and further in view of Kikuchi et al. (US 20180300868 A1).
33 Regarding claim 6, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 1 (see claim 1 rejection above), wherein the processor
performs three dimensional composition processing on the acquired image group (addressed by Kikuchi below),
extracts regions of a same member of an object from a result of the composition processing (Yumibe; [Claim 1] reciting “…each of the members constituting the structure in the global coordinate system and member identification information indicating each of the members are registered in an associated manner; a member identification information acquisition unit that specifies, on the basis of the second space information indicating the member and transformed to be in the global coordinate system…and a member specification unit that detects a plane region from the first space information or the second space information, and that assumes the detected plane region as a region indicating one member, wherein the first space information indicating the member or the second space information indicating the member is space information indicating the plane region assumed as the region indicating the member by the member specification unit, and wherein the member included in an image captured by the imaging device is specified on the basis of the member identification information acquired by the member identification information acquisition unit.”),
assigns the same identification information to images constituting the extracted regions (Yumibe; [0099] reciting “First, the facility image model construction program 310 refers to the photographing conditions added to the images from the facility image data for all the facilities uploaded to the server system 100, and searches for image data having the same facility ID photographed in the same time zone (step 220 0). Here, the "image data having the same facility ID photographed in the same time zone" is a plurality of image data of one target facility 211 photographed in one patrol inspection.”; [0204] reciting “The portable information terminal 101 specifies the coordinate value on the map of the position designated by the patroller to input the target facility 211, specifies the facility ID of the input target facility 211 by collating the specified coordinate value with the facility information data 400 (step 3604), and transmits the result to the server system 100 (step 3605).”), while not assigning the same identification information to any image not constituting the extracted regions (Higashiyama; ([0145] reciting “This Group ID is a unique ID that is set in each of the series of image-capturing in accordance with a single image-capturing command (SW2), and is generated from a random number, a current time, and the like.”; [0157] reciting “Therefore, the user can determine whether to obtain each of the composition images before and after the blur or to obtain a composition image obtained by compositing the composition images before and after the blur into the single composition image after the user sees the composition image. Further, by giving the same Group ID to multiple images captured in the series of starlit sky track, only the composition images in the same Group captured can be played back successively or as a list in response to the same image-capturing command during play back.”), and
attaches the assigned identification information to the images as the accessory information (Yumibe; [0030] reciting “In the present embodiment, Exif ( Exchangeable image file format ) is adopted as the format of the image 704. The Exif format is an image format including metadata for photographs, and various types of metadata can be added to photographs. A specific example will be described with reference to FIG. 8.”; [0040] reciting “In the facility image model 402, an omnidirectional image model of a target facility generated from a plurality of images obtained by photographing the target facility from arbitrary positions and directions at the time of patrol and inspection in the power distribution facility is held in time series, and the facility ID1100 and the omnidirectional image models 1101, 1102, and 1104 in time series are stored in association with each other. In the example of FIG. 11, an omnidirectional image model for each year (that is, an omnidirectional image model generated from image data captured in each year) is stored…”; [0050] reciting “The photographing condition addition program 309 calculates photographing conditions (equipment ID, photographing distance, photographing direction, attitude angles (A, B, C) of the portable information terminal 101 at the time of photographing, photographing date and time, photographing position information, and the like) from the acquired information transmitted from the portable information terminal 101, and adds the photographing conditions to the Exif information of the photographed image as metadata (step 1302).”).
34 Yumibe in view of Higashiyama does not explicitly teach performs three dimensional composition processing on the acquired image group, and extracts regions of a same member of an object from a result of the three dimensional composition processing…
35 Kikuchi teaches performs three dimensional composition processing on the acquired image group, and extracts regions of a same member of an object from a result of the three dimensional composition processing ([0083] reciting “In this embodiment, the first space information acquisition unit 322 calculates three-dimensional coordinates of the structure on the basis of the image data indicating the first image I.sub.L and the second image I.sub.R with parallax captured by the twin-lens camera 202, and acquires the calculated three-dimensional coordinates as the first space information on the structure.”; [0089] reciting “The member specification unit 322D detects a plane region from the first space information acquired by the first space information acquisition unit 322, and assumes the detected plane region as a region indicating one member. In this embodiment, the member specification unit 322D has a function of detecting a plane region on the basis of the three-dimensional coordinates of the plurality of feature points calculated by the three-dimensional coordinate calculation unit 322C, and classifying the plurality of feature points on a plane region (member) basis.”)…
36 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention, to have modified the method (taught by Yumibe in view of Higashiyama) to incorporate the teachings of Kikuchi to provide a method to have a processor perform three-dimensional composition for the image groups taught by Yumibe in view of Higashiyama. Doing so would allow the method(s) to be capable of easily and accurately acquiring member identification indicating a member included in an image of a structure as stated by Kikuchi ([Abstract] recited).
37 Claim 22 has similar limitations as claim 6; therefore, it is rejected under the same rationale as claim 6.
38 Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yumibe et al. (JP 2015001756 A) in view of Higashiyama et al. (US 20150358546 A1) as applied to claims 1 and 7-8 above, and further in view of Liu et al. (US 20200293830 A1).
39 Regarding claim 9, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 8 (see claims 1 and 7-8 rejections above), but does not explicitly teach wherein the information on the detection result by the image analysis includes at least one of information on a detection result of a defect or information on a detection result of a damage.
40 Liu teaches wherein the information on the detection result by the image analysis includes at least one of information on a detection result of a defect or information on a detection result of a damage ([0017] reciting “A detection model is built by using a first sub-model and a second sub-model that are cascaded; the first sub-model uses images of a detected article that are obtained at different angles and generated in time order as inputs, to obtain feature processing results of the images, and outputs the feature processing results to the second sub-model; and the second sub-model performs time series analysis on the feature processing results of the images to determine a damage detection result. As such, damage on the detected article can be found more comprehensively by using the images at different angles, and damage found in the images can be combined into a uniform detection result through time series analysis, thereby greatly improving damage detection accuracy.”).
41 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention to have modified the method (taught by Yumibe in view of Higashiyama) to incorporate the teachings of Liu, so that the image analysis results taught by Yumibe in view of Higashiyama specifically include information on detected damage. Doing so would allow more accurate price estimation depending on the item, as stated by Liu ([0061] recited).
42 Regarding claim 10, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 8 (see claims 1 and 7-8 rejections above), but does not explicitly teach wherein the information on the type determination result by the image analysis includes at least one of information on a defect type determination result or information on a damage type determination result.
43 Liu teaches wherein the information on the type determination result by the image analysis includes at least one of information on a defect type determination result or information on a damage type determination result ([0017] reciting “A detection model is built by using a first sub-model and a second sub-model that are cascaded; the first sub-model uses images of a detected article that are obtained at different angles and generated in time order as inputs, to obtain feature processing results of the images, and outputs the feature processing results to the second sub-model; and the second sub-model performs time series analysis on the feature processing results of the images to determine a damage detection result. As such, damage on the detected article can be found more comprehensively by using the images at different angles, and damage found in the images can be combined into a uniform detection result through time series analysis, thereby greatly improving damage detection accuracy.”; [0022] reciting “For example, the damage detection result can be a classification result indicating whether there is damage on the detected article, can be a degree of a certain type of damage on the detected article, can be a classification result indicating whether there are two or more types of damage on the detected article, or can be degrees of two or more types of damage on the detected article. Types of damage can include scratches, damage, stains, adhesives, etc. Sample data can be labeled based on a determined form of the damage detection result, and the damage detection result in this form can be obtained by using the trained detection model.”).
44 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention to have modified the method (taught by Yumibe in view of Higashiyama) to incorporate the teachings of Liu, so that the image analysis results taught by Yumibe in view of Higashiyama specifically include the types of damage detected by a specific detector. Doing so would allow more accurate price estimation depending on the item, as stated by Liu ([0061] recited).
45 Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Yumibe et al. (JP 2015001756 A) in view of Higashiyama et al. (US 20150358546 A1) as applied to claims 1 and 7-8 above, and further in view of Do et al. (US 20210390677 A1).
46 Regarding claim 11, Yumibe in view of Higashiyama teaches the information processing apparatus according to claim 8 (see claims 1 and 7-8 rejections above), but does not explicitly teach wherein the information on the measurement result by the image analysis includes at least one of information on a measurement result related to a size of a defect, information on a measurement result related to a size of a damage, information on a measurement result related to a shape of the defect, or information on a measurement result related to a shape of the damage.
47 Do teaches wherein the information on the measurement result by the image analysis includes at least one of information on a measurement result related to a size of a defect ([0086] reciting “The visualization of the object along with the overlay is sometimes referred to herein as a composite object image 170. The complementary information can take varying forms including, for example, position information (e.g., location of barcodes, location of text, locations of features, locations of components, etc.), defect information (e.g. the location, size, severity, etc. of imperfections identified by the image analysis inspection tools)”), information on a measurement result related to a size of a damage, information on a measurement result related to a shape of the defect, or information on a measurement result related to a shape of the damage.
48 It would have been obvious to one with ordinary skill before the effective filing date of the claimed invention to have modified the method (taught by Yumibe in view of Higashiyama) to incorporate the teachings of Do, providing a method in which defect information, including the size of a defect, is obtained from the image, where the image is obtained by the teachings of Yumibe in view of Higashiyama. Doing so would allow the inspection modules to be utilized and the graphical user interfaces to be rendered on various local and remote computing devices in real-time/near-real time as well as on-demand, as stated by Do ([0086] recited).
Conclusion
49 Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
50 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/ Examiner, Art Unit 2614
/KENT W CHANG/ Supervisory Patent Examiner, Art Unit 2614