Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This communication is filed in response to the action filed on 09/18/2025.
Claims 1-4, 6-8, 10-11 are currently amended. Claims 12-14 are new. Claims 1-14 are pending.
Response to Arguments
Applicant’s arguments filed on 09/18/2025 on pages 9-11, under REMARKS, with respect to 35 U.S.C. § 103 have been fully considered but they are not persuasive. Regarding claim 1, Applicant states on page 10 that:
[Applicant’s argument is reproduced as an image in the original document (media_image1.png, 609 × 735, greyscale).]
The examiner respectfully disagrees. Applicant is respectfully invited to consider primary reference US 2019/0377087 A1 to SHTROM et al. (hereinafter “SHTROM”), particularly paragraphs [0073]-[0074], [0086]-[0087], [0094], [0101], and [0122]. In paragraph [0086], the prior art discloses environment 100, which acts substantially as a candidate area for object detection on a ship or other vehicle. The prior art uses synthetic aperture radar imaging technologies and optical imaging technologies to identify objects, as mentioned throughout and particularly in paragraphs [0086]-[0087] and [0122]. The images are captured at various times after a time period has passed, as mentioned in paragraph [0094], the time periods being in close temporal proximity. The images are compared to where the object was predicted to be and are given comparison accuracy scores, as mentioned in paragraph [0101]. Further, the amendments to the claims necessitated a change in the examiner’s interpretation of the claims, requiring a new ground of rejection under 35 U.S.C. 102 over US 2019/0377087 A1. Please see the full rejection of the claims below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 6, and 10-14 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by US 2019/0377087 A1 to SHTROM et al. (hereinafter “SHTROM”).
As per claim 1, SHTROM discloses an image processing device comprising: at least one memory storing instructions (an electronic device 112 adapted to perform an image processing method, comprising a memory component in device 112 which stores instructions related to said image processing method; Figs. 3 and 6; paragraphs [0068], [0072]); at least one processor configured to access the at least one memory and execute the instructions to (the electronic device 112 further comprises a processor adapted to execute said instructions stored in said memory; paragraphs [0072]-[0073]): set, as a candidate area, an area where there is a possibility that an object to be subjected to an annotation processing exists in a first image captured by a synthetic aperture radar (an environment 100 (acting as the annotation area) comprises environment information which is received via the electronic device 112 and is adapted to identify various measurements associated with various sensors related to an object 120 in an optical image acting as the first image, and the images are captured and found via synthetic aperture radar imaging technologies; paragraphs [0064]-[0067], [0087], [0102]); extract an image of an area relevant to the candidate area from a third image captured by the synthetic aperture radar at a time different from the first image (a series of images comprising an object, where the object image features are extracted by the system using SAR imaging technology, each image captured in succession after waiting a time period, where that time period may be time stamped in increments in close temporal proximity within 1 ms, 5 ms, 10 ms, 50 ms, 100 ms, 150 ms, or 300 ms, etc.; paragraphs [0087], [0094]-[0095], [0102]; claim 6); receive, as an annotation area, information of an area on the first image in which the object to be subjected to the annotation processing exists (an environment 100 (acting as the annotation area) comprises environment information which is received via the electronic device 112 and is adapted to identify various measurements associated with various sensors related to an object 120 in an optical image acting as the first image; paragraphs [0064]-[0067]); extract a second image including the annotation area, wherein the second image is an optical image (radar information 512 substantially acts as a second image modality of a different method than optical imagery; the radar is adapted to determine the position of objects in order to statistically identify object 120; paragraphs [0083], [0093]); and output the first image in a state comparable with at least one of the second image and the third image (the captured SAR images are compared and the object’s projected position is given an accuracy rating as a percentage score, and the system compares the series of images captured temporally proximal to one another by a set time period; paragraphs [0087], [0093]-[0094], [0101]-[0102]).
As per claim 6, SHTROM discloses an image processing method comprising: setting, as a candidate area, an area where there is a possibility that an object to be subjected to an annotation processing exists in a first image captured by a synthetic aperture radar (an environment 100 (acting as the annotation area) comprises environment information which is received via the electronic device 112 and is adapted to identify various measurements associated with various sensors related to an object 120 in an optical image acting as the first image, and the images are captured and found via synthetic aperture radar imaging technologies; paragraphs [0064]-[0067], [0087], [0102]); extracting an image of an area relevant to the candidate area from a third image captured by the synthetic aperture radar at a time different from the first image (a series of images including a third image is captured comprising an object, where the object image features are extracted by the system using SAR imaging technology, each image captured in succession after waiting a time period, where that time period may be time stamped in increments in close temporal proximity within 1 ms, 5 ms, 10 ms, 50 ms, 100 ms, 150 ms, or 300 ms, etc.; paragraphs [0087], [0094]-[0095], [0102]; claim 6); receiving, as an annotation area, information of an area on the first image in which the object to be subjected to the annotation processing exists (an environment 100 (acting as the annotation area) comprises environment information which is received via the electronic device 112 and is adapted to identify various measurements associated with various sensors related to an object 120 in an optical image acting as the first image, and the images are captured and found via synthetic aperture radar imaging technologies; paragraphs [0064]-[0067], [0087], [0102]); extracting a second image including the annotation area, wherein the second image is an optical image (radar information 512 substantially acts as a second image modality of a different method than optical imagery, in which an optical image 710 is captured by an image sensor; the radar is adapted to determine the position of objects in order to statistically identify object 120; paragraphs [0083], [0093]); and outputting the first image in a state comparable with at least one of the second image and the third image (the captured SAR images are compared and the object’s projected position is given an accuracy rating as a percentage score, and the system compares the series of images captured temporally proximal to one another by a set time period; paragraphs [0087], [0094], [0101]-[0102]).
As per claim 10, SHTROM discloses a non-transitory program recording medium recording an image processing program for causing a computer to execute: setting, as a candidate area, an area where there is a possibility that an object to be subjected to an annotation processing exists in a first image captured by a synthetic aperture radar (an environment 100 (acting as the annotation area) comprises environment information which is received via the electronic device 112 and is adapted to identify various measurements associated with various sensors related to an object 120 in an optical image acting as the first image, and the images are captured and found via synthetic aperture radar imaging technologies; paragraphs [0064]-[0067], [0087], [0102]); extracting an image of an area relevant to the candidate area from a third image captured by the synthetic aperture radar at a time different from the first image (a series of images including a third image is captured comprising an object, where the object image features are extracted by the system using SAR imaging technology, each image captured in succession after waiting a time period, where that time period may be time stamped in increments in close temporal proximity within 1 ms, 5 ms, 10 ms, 50 ms, 100 ms, 150 ms, or 300 ms, etc.; paragraphs [0087], [0094]-[0095], [0102]; claim 6); receiving, as an annotation area, information of an area on the first image in which the object to be subjected to the annotation processing exists (an environment 100 (acting as the annotation area) comprises environment information which is received via the electronic device 112 and is adapted to identify various measurements associated with various sensors related to an object 120 in an optical image acting as the first image, and the images are captured and found via synthetic aperture radar imaging technologies; paragraphs [0064]-[0067], [0087], [0102]); extracting a second image including the annotation area, wherein the second image is an optical image (radar information 512 substantially acts as a second image modality of a different method than optical imagery, in which an optical image 710 is captured by an image sensor; the radar is adapted to determine the position of objects in order to statistically identify object 120; paragraphs [0083], [0093]); and outputting the first image in a state comparable with at least one of the second image and the third image (the captured SAR images are compared and the object’s projected position is given an accuracy rating as a percentage score, and the system compares the series of images captured temporally proximal to one another by a set time period; paragraphs [0087], [0094], [0101]-[0102]).
As per claim 11, SHTROM discloses the image processing device according to claim 1, wherein the object is a ship (the object is a ship and is configured to detect other ships; paragraph [0136]), the first image is an image captured by the synthetic aperture radar (the first image is a synthetic aperture radar image/radar information; paragraphs [0093], [0102], [0107]; claim 5), the second image includes an optical image of the ship (on-board image sensors/cameras provide optical images of the vehicle/ship as the second image; paragraphs [0094], [0099]), and the at least one processor is further configured to execute the instructions to: set the candidate area by specifying an area in which a state of reflected wave is different from surroundings within an area where there is a possibility that the ship exists in the first image (the computing system is adapted to set the monitoring areas to the front or side portion of the vehicle in order to detect, via radar, a change in the wavelengths reflected back to the radar system/reader, which would indicate an object’s presence; further, the user may increase the detection area of the radar around the ship/vehicle, in that the location(s) or positions of the transmit and/or receive antenna(s) may be selected to increase a horizontal and/or a vertical sensitivity and therefore increase the detection/candidate area/environment size; paragraphs [0021], [0062], [0066], [0071], [0082]).
As per claim 12, SHTROM discloses the image processing device according to claim 1, wherein the third image is captured on a different date from the first image (where the optical image and the other sensor information have associated timestamps (a time stamp would include the month/date/year/time an image was captured optically or by SARs) that are concurrent or in close temporal proximity; paragraph [0094]).
As per claim 13, SHTROM discloses the image processing device according to claim 1, wherein the at least one processor is further configured to execute the instructions to extract the second image among optical images based on capturing position, capturing date and time, and cloud amount (where the optical image and the other sensor information have associated timestamps (a time stamp would include the month/date/year/time an image was captured optically or by SARs) that are concurrent or in close temporal proximity; paragraphs [0093-0094]).
As per claim 14, SHTROM discloses the image processing device according to claim 13, wherein the cloud amount of the second image is less than a threshold (a visibility threshold exists in relation to lumens and light captured by the sensor, wherein the more fog or cloudy weather there is the less light would come through to the sensor and thus an illuminance threshold works to determine cloudy weather and skies; paragraphs [0070-0071]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 2-5 and 7-9 are rejected under 35 U.S.C. § 103 as being obvious over US 2019/0377087 A1 to SHTROM et al. (hereinafter “SHTROM”) in view of US 2021/0312190 A1 to FILLBRANDT et al. (hereinafter “FILLBRANDT”).
As per claim 2, SHTROM discloses the image processing device according to claim 1. SHTROM fails to disclose wherein the at least one processor is further configured to execute the instructions to: output an image of the candidate area of the first image and an image of an area relevant to the candidate area of the third image in a comparable state.
FILLBRANDT discloses wherein the at least one processor is further configured to execute the instructions to: output an image of the candidate area of the first image and an image of an area relevant to the candidate area of the third image in a comparable state (the system is adapted to determine, based on video analysis data, the presence of a moving object; the analysis device controlling camera modules 4 monitoring areas 5 of ship 12 compares images and/or image sequences in the video data; Fig. 2; paragraphs [0010], [0039]-[0041]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to output an image of the candidate area of the first image and an image of an area relevant to the candidate area of the third image in a comparable state, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to detect a person falling overboard, as suggested by FILLBRANDT at paragraph [0034]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 2.
As per claim 3, SHTROM discloses the image processing device according to claim 1. SHTROM fails to disclose wherein the at least one processor is further configured to execute the instructions to: set a plurality of candidate areas by sliding an area on the first image.
FILLBRANDT discloses wherein the at least one processor is further configured to execute the instructions to: set a plurality of candidate areas by sliding an area on the first image (external areas 13 of ship 12 are monitored via camera modules 4, which are adapted to be controlled via a central analysis device that sets all of the modules to cover each corresponding monitoring area 5 to which each module is assigned in order to detect man-overboard scenarios; this functionality is provided over a user interface provided as a selection module, which substantially acts the same way as a slidable selection tool; Fig. 2; paragraph [0039]; claim 4).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to have the ability to set a plurality of candidate areas by sliding an area on the first image, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to provide the ability, via a central monitoring device, to observe and ensure all monitoring areas are covered by a properly operating module in order to ensure complete coverage of the ship’s perimeter, as suggested by FILLBRANDT at paragraph [0039]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 3.
As per claim 4, SHTROM discloses the image processing device according to claim 1, wherein the at least one processor is further configured to execute the instructions to: acquire a plurality of pieces of image data including an area relevant to the annotation area (the computing system comprising the processing component is adapted to provide data structure 700, which may include: one or more instances 708 of optional optical images 710, one or more associated instances of other sensor information 712 (such as radar information), one or more features of optional objects 714 identified in or associated with optical images 710 and other sensor information 712, and various other data features further stated in [0093] relating to the monitored area of the vehicle/ship; Fig. 7; paragraphs [0093], [0136]). SHTROM fails to disclose and generate the third image relevant to the first image including the annotation area by combining the plurality of pieces of image data.
FILLBRANDT discloses and generate the third image relevant to the first image including the annotation area by combining the plurality of pieces of image data (the images are combined as video data, where each image of the falling object may be compared in order to determine the size and falling features of the object and thereby determine whether said object is a person overboard; paragraphs [0010], [0039]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to have the ability to generate the third image relevant to the first image including the annotation area, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to compare multiple different features of the image data to determine object type against reference images and reference data values, allowing an alarm to be sounded if the object meets said reference values, as suggested by FILLBRANDT at paragraphs [0039]-[0040]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 4.
As per claim 5, SHTROM discloses the image processing device according to claim 1. SHTROM fails to disclose wherein the at least one processor is further configured to execute the instructions to: receive an input as to whether to perform comparison with the second image for each of the first images; extract the second image when information indicating that comparison with the second image is to be performed is input; and output the first image and the second image in a comparable state.
FILLBRANDT discloses wherein the at least one processor is further configured to execute the instructions to: receive an input as to whether to perform comparison with the second image for each of the first images (the central analysis system is adapted to determine, based on reference values of falling objects, whether to compare two image frames comprising a falling object to determine if said object represents a man-overboard event; paragraphs [0010], [0038]-[0040]); extract the second image when information indicating that comparison with the second image is to be performed is input (the second image of the video sequence, wherein video is a sequence of repeatedly captured images, is determined to be compared to a first image of the captured video comprising the falling object; paragraphs [0010], [0034]-[0036]); and output the first image and the second image in a comparable state (the system is adapted to determine, based on video analysis data, the presence of a moving object; the analysis device controlling camera modules 4 monitoring areas 5 of ship 12 compares images and/or image sequences in the video data; Fig. 2; paragraphs [0010], [0039]-[0041]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to have the ability to receive an input as to whether to perform comparison with the second image for each of the first images, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to provide the ability to alert the users/captain of the ship if an item has fallen overboard, as suggested by FILLBRANDT at paragraph [0010]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 5.
As per claim 7, SHTROM discloses the image processing method according to claim 6. SHTROM fails to disclose further comprising: outputting an image of the candidate area of the first image and an image of an area relevant to the candidate area of the third image in a comparable state.
FILLBRANDT discloses further comprising: outputting an image of the candidate area of the first image and an image of an area relevant to the candidate area of the third image in a comparable state (the system is adapted to determine, based on video analysis data, the presence of a moving object; the analysis device controlling camera modules 4 monitoring areas 5 of ship 12 compares images and/or image sequences in the video data; Fig. 2; paragraphs [0010], [0039]-[0041]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to output an image of the candidate area of the first image and an image of an area relevant to the candidate area of the third image in a comparable state, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to detect a person falling overboard, as suggested by FILLBRANDT at paragraph [0034]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 7.
As per claim 8, SHTROM discloses the image processing method according to claim 6, further comprising: acquiring a plurality of pieces of image data including an area relevant to the annotation area (the computing system comprising the processing component is adapted to provide data structure 700, which may include: one or more instances 708 of optional optical images 710, one or more associated instances of other sensor information 712 (such as radar information), one or more features of optional objects 714 identified in or associated with optical images 710 and other sensor information 712, and various other data features further stated in [0093] relating to the monitored area of the vehicle/ship; Fig. 7; paragraphs [0093], [0136]). SHTROM fails to disclose and generating the third image relevant to the first image including the annotation area by combining the plurality of pieces of image data.
FILLBRANDT discloses and generating the third image relevant to the first image including the annotation area by combining the plurality of pieces of image data (the images are combined as video data, where each image of the falling object may be compared in order to determine the size and falling features of the object and thereby determine whether said object is a person overboard; paragraphs [0010], [0039]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to have the ability to generate the third image relevant to the first image including the annotation area, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to compare multiple different features of the image data to determine object type against reference images and reference data values, allowing an alarm to be sounded if the object meets said reference values, as suggested by FILLBRANDT at paragraphs [0039]-[0040]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 8.
As per claim 9, SHTROM discloses the image processing method according to claim 6. SHTROM fails to disclose further comprising: receiving an input as to whether to perform comparison with the second image for each of the first images; extracting the second image when information indicating that comparison with the second image is to be performed is input; and outputting the first image and the second image in a comparable state.
FILLBRANDT discloses further comprising: receiving an input as to whether to perform comparison with the second image for each of the first images (the central analysis system is adapted to determine, based on reference values of falling objects, whether to compare two image frames comprising a falling object to determine if said object represents a man-overboard event; paragraphs [0010], [0038]-[0040]); extracting the second image when information indicating that comparison with the second image is to be performed is input (the second image of the video sequence, wherein video is a sequence of repeatedly captured images, is determined to be compared to a first image of the captured video comprising the falling object; paragraphs [0010], [0034]-[0036]); and outputting the first image and the second image in a comparable state (the system is adapted to determine, based on video analysis data, the presence of a moving object; the analysis device controlling camera modules 4 monitoring areas 5 of ship 12 compares images and/or image sequences in the video data; Fig. 2; paragraphs [0010], [0039]-[0041]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify SHTROM to have the ability to receive an input as to whether to perform comparison with the second image for each of the first images, as taught by the FILLBRANDT reference. The suggestion/motivation for doing so would have been to provide the ability to alert the users/captain of the ship if an item has fallen overboard, as suggested by FILLBRANDT at paragraph [0010]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine FILLBRANDT with SHTROM to obtain the invention as specified in claim 9.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677