DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on 09/30/2025 have been fully considered but they are not persuasive.
Generally, Examiner notes that Applicant's representative appears to have discovered the fun of using ChatGPT to generate 72 pages of arguments. However, the arguments appear formulaic; they present proposed summaries of the Specification and the prior art references without addressing the specific claim language or the specific reasons provided for rejecting that claim language on the basis of obviousness. More importantly, these arguments did not help the Examiner identify potentially allowable subject matter in the claims or particularly point out features that Applicant itself believes to be an inventive improvement over the applications of the prior art.
Applicant argues: “The Upendran field of use is disparate from EarthCam. Upendran field of use is with residential buildings and small commercial buildings. EarthCam field of use is with monitoring a construction worksite to support safety.”
Examiner notes that the field of use considered in the MPEP is not directed to the location of use, but rather to whether a prior art reference is within the field of the inventor’s endeavor. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). Therefore, using Applicant’s invention at a different location has no bearing on the field of the inventor’s endeavor, which is the application of cameras for monitoring.
Applicant argues: “Walkaround A suspension bridge can be longer than 4,000 feet, higher than 500 feet and wider than 75 feet. A sports complex can occupy more than 150 acres. … The quality of the photos would most likely be questionable at best.”
Examiner notes that a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.
Further, the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985).
Regarding the newly amended language, Applicant argues: “Upendran does not suggest or teach a fixed location camera installed two hundred feet high or more and a thousand feet or more from a worksite.”
Examiner notes that this feature is addressed in the updated reasons for rejection below.
Applicant argues on pages 13-15: “Overlay Guide Upendran provides a semi-transparent overlay guide for the user for each of four sides of a building. There is a residential and a small commercial building guide. The user matches a corner of the image with a corner in the overlay guide for each of the four sides of a building. … Unlike Upendran, the EarthCam camera cannot use an overlay guide because the architectural designs and construction work sites in the EarthCam field of use are often intended to be unique and distinctive.”
Examiner notes that describing an opinion of what “the EarthCam camera cannot use” does not limit the claim to a particular method step.
Applicant argues on page 14: “Upendran does not suggest or teach using an overlay guide to create a quality photograph of worksites to support safety, performance, industry best practices, structural integrity, and regulatory compliance at the worksite. • Upendran does not suggest or teach the use or storing of benchmark images. • Upendran does not suggest or teach the use of benchmark images for focusing the camera. • Upendran does not suggest or teach the use or storing of all images. • Upendran does not suggest or teach the use or storing of all images with an image chain of custody.”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 15-16: “Upendran uses the sequential overlay guide to provide the user with a sample guide image which matches the perspective of the user. In the EarthCam field of use, an operator identifies a ground truth target object at the worksite. The image of the target object is used to focus the camera and correctly align the camera view.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. See reasons for rejection below.
Applicant argues on page 16: “Upendran does not suggest or teach a camera installed at a fixed site. • Upendran does not suggest or teach the use of a target object at a property. • Upendran does not suggest or teach the use of a target object at a property to assist in focusing the camera. • Upendran does not suggest or teach the use of a target object at a property to create a target object benchmark image to assist in focusing the camera. • Upendran does not suggest or teach the use of an onsite operator and a remote operator to work in concert to focus a camera.”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on pages 16-18: “APPLICANT CLAIM STEP 1 c. executing a focus operation based on a predetermined camera system focus specification; In Upendran, "Therefore, image quality includes, but is not limited to, predetermined acceptable camera-based parameters (e.g., lighting, resolution, distance, movement, angle, etc.), … In Shanmugam, the disclosed image processing device may control the image capture device to capture a first image … In EarthCam focus settings and image quality are determined by an operator in several stages. … EarthCam operators set camera focus and image parameters for each individual camera and for each worksite.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. See reasons for rejection below.
Applicant argues on page 16: “• Upendran does not suggest or teach a user uniquely setting the camera focus and image parameters for thousands of images over a long period time which is specific to property. • Shanmugam does not suggest or teach setting the focus of the image capture device before an image capture device becomes operational. • Shanmugam does not suggest or teach setting the initial focus of the image capture device in a laboratory and simulating the ground truth size of a target object from worksite with an EarthCam Resolution and Focus Device. • Shanmugam does not suggest or teach a manual and automatic focus setting where the two settings are used to confirm the accuracy of the final ground truth focus setting. • Shanmugam does not suggest or teach accurately setting the initial focus of an image capture device to support worksite safety and regulatory compliance.”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 18-20: “APPLICANT CLAIM STEP 1 d. capturing an onsite image, including the onsite target object, using the predetermined focus specification; In Upendran a picture is taken when the image is substantially aligned with the overlay. "Perfect alignment is not necessary" for the image to be taken. (para 0024). … In EarthCam, an EarthCam operator determines the focus settings and image quality in several stages. An operator selects an onsite target object. A laboratory device and process are used to establish the initial acceptable image focus …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. See reasons for rejection below.
Applicant argues on page 20: “ • Upendran does not suggest or teach the use of a ground truth target object at a property. • Upendran does not suggest or teach identifying a ground truth target object at a property and the ground truth characteristics for the target object. • Upendran does not suggest or teach the use of an image of the target object at a property to assist in focusing the camera. • Upendran does not suggest or teach the use of a target object at a property to create a target object benchmark image to assist in focusing the camera. • Upendran does not suggest or teach the use of an onsite operator and a remote operator to work in concert to focus a camera. • Upendran does not suggest or teach capturing images of a property without using an overlay guide”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 20-22: “APPLICANT CLAIM STEP 1 e. determining pixel characteristics of the onsite target object in the onsite image and stored target object in the stored benchmark image; In Upendran, the entire image of one side of a building is "substantially aligned" … In Shanmugam, the image processing device determines a blur object from one or more objects, based on the pixel characteristics for determined blur values. … EarthCam uses not only the image of a worksite but also a target object within an image of a worksite. It uses the image and the target object separately and together. The pixel characteristics of the onsite target object in the onsite image and stored target object in the stored benchmark image are identified and used to define the An operator accesses a benchmark image. An operator uses an EarthCam proprietary instruction set to identify a target object in an image and the bounding box of a target …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 21: Upendran "Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available)" do not suggest or teach creating authentic evidentiary repeatability of each image to the previous image and also to the next image. • Upendran does not suggest or teach the use of a ground truth target object at a property. • Upendran does not suggest or teach identifying a ground truth target object at a property and the ground truth characteristics for the target object. • Upendran does not suggest or teach the use of an image of the target object at a property to assist in focusing the camera. • Upendran does not suggest or teach the use of a target object at a property to create a target object benchmark image and use the pixel characteristics to assist in focusing the camera. • Shanmugam does not suggest or teach the use of a ground truth target object at a property. • Shanmugam does not suggest or teach identifying a ground truth target object at a property and the ground truth characteristics for the target object. • Shanmugam does not suggest or teach the use of an image of the target object at a property to assist in focusing the camera. • Shanmugam does not suggest or teach the use of a target object at a property to create a target object benchmark image and use the pixel characteristics to assist in focusing the camera. • Shanmugam does not suggest or teach determining a blur object from one or more objects, based on the pixel characteristics of a benchmark image. • Shanmugam does not suggest or teach comparing an image to a benchmark image to determine if the pixel characteristics in the image are adequate.”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 20-22: “APPLICANT CLAIM STEP 1 f. using a rubric, the remote operator determining if the pixel characteristics of the onsite target object in the onsite image and the stored target object in the stored benchmark image are similar and adequate; Upendran uses visual indicators … EarthCam compares the pixel characteristics of the onsite target object in the onsite image with the stored benchmark image using a rubric. The comparison is used to ensure the current onsite target object image is the same as the previous onsite target object image and the same as the next onsite target object image. …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 23: “Upendran does not suggest or teach the use of an onsite ground truth target object at a property. • Upendran does not suggest or teach identifying an onsite ground truth target object at a property and the ground truth characteristics for the onsite target object. • Upendran does not suggest or teach the use of an image of the onsite target object at a property to assist in focusing the camera. • Upendran does not suggest or teach the use of a target object benchmark image for a property. • Upendran does not suggest or teach comparing an image to a benchmark image to determine if the pixel characteristics in the image are adequate.”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 23-25: “APPLICANT CLAIM STEP 1 g. capturing an additional onsite image, including the onsite target object, using a different focus specification; In Upendran, "if the image does not meet the minimum threshold for quality a first indicator is returned to the computing device (and the image discarded)." … An EarthCam camera takes an image of the same view of a worksite every few minutes but not limited to a few minutes. It captures images of a target object and a worksite continuously, from a fixed position, remotely, automatically every day for several years. During the time the camera is at the worksite, it may take images of the target object and the worksite several hundred thousand times.…”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 25: “Upendran does not suggest or teach storing bad images or images which do not meet the minimum threshold for quality. • Upendran does not suggest or teach images are not discarded, destroyed, or changed. • Upendran does not suggest or teach all images, without regard to quality, are stored. • Upendran does not suggest or teach establishing pixel characteristics for a benchmark target object and benchmark worksite image specific to each individual worksite. • Upendran does not suggest or teach remotely adjusting the camera focus, to correct a pixel deviation in a current image from a benchmark image, to ensure 25 authentic evidentiary repeatability of each image to the previous image and to the next image to support worksite safety and regulatory compliance. • Upendran teaches 'a bad image' is an image which is not adequately aligned with an overlay guide. • Upendran does not suggest or teach directly comparing the pixel characteristics of a current target object image and current worksite image from a worksite with benchmark images unique to the same worksite.”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 26-28: “APPLICANT CLAIM STEP 1 h. using one or more rubrics, the remote operator determining if the pixel characteristics of the onsite target object in the additional onsite image and the stored target object in the stored benchmark image are similar and adequate; In Upendran a "picture can be taken manually by the user or automatically taken when substantially aligned with the overlay guide" … In EarthCam the camera is an all weather, heavyweight, high definition, heteronomous camera, affixed to a tall metal structure, including communication systems, backup power systems which the user does not move. The user cannot move around a building to follow an overlay guide with an EarthCam camera.…”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 28: “• Upendran does not suggest or teach the use of a rubric to determine if the pixel characteristic of a target object image is adequate. • Upendran does not suggest or teach the use of a benchmark image. • Upendran does not suggest or teach determining image quality without the use of overlay guide, or if an image is aligned to an overlay guide, or having an overlay guide unique to the property. • Upendran does not suggest or teach an all weather, fixed location, heavyweight, industrial, high definition, heteronomous camera, affixed to a tall metal structure, …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 29-31: “APPLICANT CLAIM STEP 1 i. repeating steps g. and h. until the pixel characteristics of the onsite target object in a most recent additional onsite image and the stored target object in the stored benchmark image are similar and adequate, and designating a most recent rubric score as a final rubric score and a most recent additional onsite image as an updated benchmark image, wherein the camera continuously takes images at the construction work site; In Upendran the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images. (para 0021) … In EarthCam an all weather, heavyweight, fixed location, high definition, heteronomous camera, affixed to a tall metal structure, including communication systems, backup power systems which the user does not move, located above, below or at ground level. The user does not walk around the property holding the camera capturing images..…”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 28: “ Upendran does not suggest or teach an overlay guide unique to property or to a capture device. • Upendran teaches an image need only to be "substantially aligned (perfect alignment not required) with the overlay guide." (para 0024) • In Upendran, a picture taker must align an image to a corner of an overlay guide frame. The pixel resolution of an image is not a requirement for alignment. Only images which aligned to a overlay guide frame are captured. • Upendran does not suggest or teach repeatability from one image to the next image. • Upendran does not suggest or teach adjusting the focus for a capture device if the device is repaired, replaced or relocated. • Upendran does not suggest or teach establishing benchmark images unique to a property or updating benchmark images unique to a property when a capture device is repaired, replaced or relocated. …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 31-32: “APPLICANT CLAIM STEP 1j. updating a record with an identifier for the updated benchmark image; and In Upendran, historical data are similar comparative images to a property. (para 0025) … In EarthCam a target object image benchmark and a worksite image benchmark are unique to a property. Information about a benchmark includes, but not limited to, the pixels high, pixels wide and the number of color channels, lighting, distance from the camera, and camera height, tilt, lens direction, zero-degree marker, orientation, camera identification, worksite location.…”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 32: “ • Upendran does not suggest or teach historical data unique to each property. As historical data is not unique to a property there is no need in Upendran to uniquely identify historical data with a unique identification number. • Upendran does not suggest or teach a need to uniquely distinguish the date and time each historical data was created. • Upendran does not suggest or teach historical data captured from one property cannot be used for another property. …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 32-34: “APPLICANT CLAIM STEP 1 k. storing the updated benchmark image. In Upendran, historical data are similar comparative images. … In EarthCam all benchmark and all onsite images are stored in a docu-vault unique to a client worksite. The pixel characteristics of a target object benchmark image and a worksite benchmark image are established in the laboratory…”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments and prior art is not limited by Applicant’s own selective reading and explanation. See reasons for rejection below.
Applicant argues on page 34: “ • Upendran does not suggest or teach historical images are used to determine when a property image is acceptable. • Upendran does not suggest or teach a benchmark image, unique to a worksite and used to determine if an image of a worksite or an image of a target object are acceptable. • Upendran does not suggest or teach updating a benchmark image when the camera is repaired, relocated or replaced. …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on pages 34-36: “APPLICANT CLAIM 2. The method of claim 1, wherein a step of retrieving is executed using an address of the stored benchmark image in the docu-vault, the address comprising a numeric chronological feature, and a multi level and hierarchical sequence numbering feature. … In EarthCam, images are specifically identified as benchmark images and distinguished from other images of the worksite. Each Benchmark image is stored on a docu-server unique to a worksite. In each camera focus steps in Process 5: Camera System Focus Process,”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments, and the prior art is not limited by Applicant’s own selective reading and explanation. It would have been obvious to one of ordinary skill in the art to supplement the teachings of the databases and image identifiers in Upendran with the use of date and time as image identifiers and database addresses comprising a numeric chronological feature and a multi-level and hierarchical sequence numbering feature, as taught in Samarasekera, in order to locate the images in the database based on desired identifiers and image properties. See Upendran, Paragraphs 27-29, 50 and Samarasekera, Paragraph 45. See reasons for rejection below.
Applicant argues on page 34: “ • Samarasek does not suggest or teach a benchmark image identified and distinguished from other worksite images. It does not suggest or teach a benchmark image stored separate from other images of a worksite. • Samarasek does not suggest or teach a unique docu-vault for an autonomous and a worksite. The address for a benchmark comprising a numeric chronological feature, and a multi-level and hierarchical sequence …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection below.
Applicant argues on page 35: “APPLICANT CLAIM 4. The method of claim 1, wherein steps c. and d. are executed by the onsite operator. In Upendran the picture taker is onsite and moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images. (para 0021) … In EarthCam, an autonomous camera takes an image of the same view of a worksite every few minutes but not limited to a few minutes without an onsite operator. It captures images of a worksite autonomously and continuously, from a fixed position, remotely, automatically every day for several years.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claims are not limited to the details provided in the arguments, and the prior art is not limited by Applicant’s own selective reading and explanation. Reciting that steps are executed by an onsite operator does not limit the claims to automation without an onsite operator, and the obviousness of manual or automatic operation is cited to Upendran, Paragraphs 23-24 and Fig. 2. See reasons for rejection below.
Applicant argues on page 36: “ • Upendran does not suggest or teach the picture taker to be remote from the building or automatically taken without an overlay guide. • Upendran does not teach using a benchmark image, unique to a property, to determine if an image of the property is acceptable. • Upendran does not suggest or teach the need for a operator to travel to a worksite to repair, replace or relocate and autonomous operating camera unique to a worksite. …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 36: “APPLICANT CLAIM 5. The method of claim 1, wherein the rubric is one or more of metric, digital, or subjective. In Upendran "The picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images." (para 0021 ).… In EarthCam, an autonomous camera takes an image of the same view of a target object and a worksite every few minutes but not limited to a few minutes. …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. The rejection of this claim cites Upendran, Paragraph 25 and Fig. 2. See reasons for rejection below.
Applicant argues on page 38: “ • Upendran does not suggest or teach taking several hundred thousand images of a single property. • Upendran does not suggest or teach capturing an image without a picture taker aligning each image to an overlay guide. • Upendran does not suggest or teach using an instruction set to automate operating a rubric used to determine image quality for a high volume of images captured by the camera of the worksite. …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 38: “APPLICANT CLAIM 6. The method of claim 1, further comprising updating a client request form with a grade based on whether the pixel characteristics of the onsite target object in the onsite image and the pixel characteristics or the stored target object in the stored benchmark image are similar and adequate. In Upendran the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images. (para 0021)… In EarthCam, an autonomous camera takes an image of the same view of a worksite every few minutes but not limited to a few minutes without an onsite operator. It captures images of a worksite autonomously and continuously, from a fixed position, …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. The rejection of this claim cites Upendran, Paragraph 25 and Fig. 2. See reasons for rejection below.
Applicant argues on page 40: “ • Upendran does not suggest or teach using a rubric to statistically determine the pixel characteristic for a benchmark image. • Upendran does not suggest or teach a user interface form to certify an evidentiary chain of custody for images and benchmark images. …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 40: “APPLICANT CLAIM 8. The method of claim 1, further comprising updating one or more of a client request form with an identification of the updated benchmark image, a camera system log with an identification of the updated benchmark image, the client request form with a focus specification for the updated benchmark image, and a camera system log with a focus specification for the updated benchmark image In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images from multiple angles and distances.… In EarthCam, a Camera System Log (Fig 18a-e) and Client Request Form (19ad) are used to monitor and record all significant processes, aspects and information about the camera for a worksite and the image capture for a worksite.…”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Claim rejection cites Upendran, Paragraphs 21, 50. See reasons for rejection below.
Applicant argues on page 41: “ • Upendran does not suggest or teach the use of benchmark images, storing benchmark images or recording information about benchmark images. • Upendran does not suggest or teach recording information about: Client Location Information, Docu-Narrative Request Date, Docu-Narrative Length of Time, Camera System Initial Installation and Relocation Process, Camera System
Mission Operating Software and Process, Mission Instructions Sent to Camera, …”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 42: “APPLICANT CLAIM 9. The method of claim 1, wherein the focus specification comprises a position of a servo motor benchmark zero-degree marker relative to a servo motor 360-degree marker for the camera system. In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) … In EarthCam, before a camera is initially installed or replaced at the worksite, a benchmark image is created for the camera in the laboratory. In EarthCam, after a camera is repaired or relocated or capturing a different view at the worksite the camera must be refocused. Before a camera becomes operational again, images are taken at the worksite for use as benchmark images, as part of the focusing process. …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. See reasons for rejection below.
Applicant argues on page 46: “ • Upendran does not suggest or teach using an overlay guide to focus a camera. • Shanmugam does not suggest or teach an autofocus failsafe. • Shanmugam does not suggest or teach using a benchmark image to focus a camera and a zero-degree marker as an autofocus failsafe. • Ishikawa does not suggest or teach a method to overcome the known limitations of phase detection autofocus.…”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 47: “APPLICANT CLAIM 12. The method of claim 1, wherein a step of storing comprises storing the updated benchmark image in a docu-vault and assigning a unique identifier to the updated benchmark image. In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images from multiple angles and distances.… In EarthCam, a Camera System Log (Fig 18a-e) is used to monitor and record all significant processes, aspects and information about the camera for a worksite. A unique camera system log is used for each camera and for each worksite. …”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. See reasons for rejection below.
Applicant argues on page 48: “ • Upendran does not suggest or teach the use of benchmark images, storing benchmark images or recording information about benchmark images. • Upendran does not suggest or teach recording information about: Client Location Information, Docu-Narrative Request Date, Docu-Narrative Length of Time,…”
Examiner notes that this states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 47: “APPLICANT CLAIM 13. The method of clam 1, wherein the pixel characteristics of a target object are determined based on a bounding box of a target object. In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images from multiple angles and distances.… In EarthCam, a target object image is an image of a ground truth object on a worksite. A target object image is a smaller image within a worksite image. An operator uses an EarthCam proprietary instruction set to identify a target object within the worksite image. An operator uses an EarthCam proprietary instruction set … • Upendran does not suggest or teach identifying a ground truth target object for a building or determining the pixel dimension of a target object image or the use of a bounding box. • Upendran does not suggest or teach visual indicators as target object image bounding boxes or pixel dimensions for a target object”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 50: “APPLICANT CLAIM 15 STEP The method of claim 1, further comprising, prior to step a.: an operator conducts a laboratory focus setup process, In EarthCam, before a camera is initially installed or replaced at the worksite, a benchmark image is created for the camera in the laboratory. (843) An operator uses an EarthCam Resolution and Focus Device to estimate the size of a ground truth target object and an image in its entirety. (845) An operator selects an object on an EarthCam Resolution … • Upendran does not suggest or teach creating a benchmark image in a laboratory before capturing images of a property. • Upendran does not suggest or teach the use of benchmark images, storing benchmark images or recording information about benchmark images. • Upendran does not suggest or teach recording information about: Client Location Information, Docu-Narrative Request Date, Docu-Narrative Length of Time,”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 51: “APPLICANT CLAIM STEP 15. b. automatically focusing the camera system based on an initial focus specification; In Shanmugam, the disclosed image processing device may control the image capture device to capture a first image (for example, an interim or a preview image). … In EarthCam, before a camera is initially installed at the worksite, a benchmark image is created for the camera in the laboratory. (Fig 3a-f) A camera focus and pixel resolution are set in a laboratory before the camera is installed at a worksite and becomes operational taking images of the worksite. … • Shanmugam does not suggest or teach setting the focus of the image capture device before an image capture device becomes operational. • Shanmugam does not suggest or teach setting the initial focus of the image capture device in a laboratory and simulating the ground truth size of a target object from worksite with an EarthCam Resolution and Focus Device.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 54: “APPLICANT CLAIM STEP 15 automatically capturing an automatic image of a resolution and focus device; In Upendran "The picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images." (para 0021) … In EarthCam, before a camera is initially installed at the worksite. Before a camera takes images at a worksite, a benchmark image is created for the camera in the laboratory. (Fig 3a-f) A camera focus and pixel resolution are set in a laboratory before the camera is installed at a worksite and becomes operational taking images of the worksite. … • Upendran does not suggest or teach taking a picture without using an overlay guide. • Upendran does not suggest or teach taking a picture before the picture taker arrives at the building.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 56: “APPLICANT CLAIM STEP 15 selecting a selected target object in the automatic image; "In Shanmugam, the disclosed image processing device may control the image capture device to capture a first image (for example, an interim or a preview image). The captured preview image may correspond to a scene that may include one or more objects, such as humans, animals, plants, and other non-living entities .… In EarthCam, before a camera is initially installed at the worksite. Before a camera takes images at a worksite, a benchmark image is created for the camera in the laboratory. (Fig 3a-f) … • Shanmugam does not suggest or teach setting the initial image focus in a laboratory and simulating the ground truth size of a target object from worksite with an EarthCam Resolution and Focus Device. • Shanmugam does not suggest or teach before using a camera at a scene to travel to the scene to identify a target object at the scene and take an image of the scene and the target object with a different camera.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 59: “APPLICANT CLAIM STEP 15 manually focusing the camera; In Shanmugam, "the disclosed image processing device may control the image capture device to capture a first image (for example, an interim or a preview image). … (843) An operator uses an EarthCam Resolution and Focus Device to estimate the size of a ground truth target object from the worksite and a worksite image in its entirety. (845) An operator selects an object on an EarthCam Resolution and Focus Device to use as the target object in the image. The EarthCam Resolution and Focus Device simulates the ground truth size of a worksite target object • Shanmugam does not suggest or teach setting the focus of the image capture device before an image capture device becomes operational. • Shanmugam does not suggest or teach setting the initial image focus in a laboratory and simulating the ground truth size of a target object from worksite with an EarthCam Resolution and Focus Device.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 61: “APPLICANT CLAIM STEP 15 manually capturing a manual image; In Upendran "The picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images." (para 0021) … (In EarthCam, before a camera is initially installed at the worksite. Before a camera takes images at a worksite, a benchmark image is created for the camera in the laboratory. (Fig 3a-f) … • Upendran does not suggest or teach taking a picture without using an overlay guide. • Upendran does not suggest or teach taking a picture before the picture taker arrives at the building. • Upendran does not suggest or teach accurately setting the initial focus of an image capture device to support worksite safety and regulatory compliance.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 63: “APPLICANT CLAIM STEP 15 identifying the selected target object in the manual image; In Shanmugam, the disclosed image processing device may control the image capture device to capture a first image (for example, an interim or a preview image). The captured preview image may correspond to a scene that may include one or more objects, such as humans, animals, plants, and other non-living entities.… (In EarthCam, before a camera is initially installed at the worksite. Before a camera takes images at a worksite, a benchmark image is created for the camera in the laboratory. (Fig 3a-f • Shanmugam does not suggest or teach setting the initial image focus in a laboratory and simulating the ground truth size of a target object from worksite with an EarthCam Resolution and Focus Device. • Shanmugam does not suggest or teach before using a different camera at a scene, to travel to the scene to identify a target object at the scene, take an image of the scene and the target object within the scene using a different camera from the camera in the laboratory. • Shanmugam does not suggest or teach before using a camera at a scene to get the ground truth dimensions of the target object at the scene.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 66: “APPLICANT CLAIM STEP 15 identifying the automatic image or the manual image as a camera benchmark image; and In Shanmugam, the disclosed image processing device may control the image capture device to capture a first image (for example, an interim or a preview image). … In EarthCam, before a camera is initially installed at the worksite, a benchmark image is created for the camera in the laboratory. (Fig 3a-f) … • Shanmugam does not suggest or teach setting creating a benchmark image for an image capture device becomes the image capture device is operational or is used at a scene. • Shanmugam does not suggest or teach setting the initial focus of the image capture device in a laboratory and simulating the ground truth size of a target object from worksite with an EarthCam Resolution and Focus Device.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 68: “APPLICANT CLAIM STEP 15 assigning a unique identifier to the camera benchmark image. In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images from multiple angles and distances.… In EarthCam, a Camera System Log (Fig 18a-e) is used to monitor and record all significant processes, aspects and information about the camera for a worksite. A unique camera system log is used for each camera and for each worksite. … • Upendran does not suggest or teach taking a picture before the picture taker arrives at the building. • Upendran does not suggest or teach accurately setting the initial focus of an image capture device to support worksite safety and regulatory compliance. • Upendran does not suggest or teach the use of benchmark images, storing benchmark images or recording information about benchmark images.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 71: “APPLICANT CLAIM 21. The method of claim 15, further comprising storing the camera benchmark image in a docu-vault. In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images from multiple angles and distances.… In EarthCam, a Camera System Log (Fig 18a-e) and Client Request Form (Fig 19a-d) are used to monitor and record all significant processes, aspects and information about the camera for a worksite. A unique Camera System Log and Client Request Form are used for each camera and for each worksite.… • Upendran does not suggest or teach taking a picture without using an overlay guide. • Upendran does not suggest or teach taking a picture before the picture taker arrives at the building. • Upendran does not suggest or teach accurately setting the initial focus of an image capture device to support worksite safety and regulatory compliance.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 73: “APPLICANT CLAIM 22. The method of claim 15, further comprising updating one or both of a client request form and a camera system log to indicate that the pixel characteristics of the target object in the automatic image and the target object in the manual image are adequate. In Upendran, ground-level images are captured as the picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images from multiple angles and distances.… In EarthCam, a Camera System Log (Fig 18a-e) and Client Request Form (Fig 19a-d) are used to monitor and record all significant processes, aspects and information about the camera for a worksite.… • Upendran does not suggest or teach taking a picture without using an overlay guide. • Upendran does not suggest or teach taking a picture before the picture taker arrives at the building. • Upendran does not suggest or teach accurately setting the initial focus of an image capture device to support worksite safety and regulatory compliance.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 74: “APPLICANT CLAIM 23. A method for changing a camera system focus, the camera system for recording images, the method comprising: … In Upendran "The picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images." (para 0021) … In EarthCam, after a camera is repaired or relocated or capturing a different view at the worksite the camera must be refocused. Before a camera becomes operational again, images are taken at the worksite for use as benchmark images, as part of the focusing process. An onsite operator and a remote operator work together to focus the camera. An onsite operator will manually create a new target object image and a new worksite image. The images will be sent to a remote operator. … • Upendran does not suggest or teach refocusing a camera, by using a new benchmark image after a camera has been repaired, relocated, or had the camera view changed at the worksite and before taking more images of a property. • Shanmugam does not suggest or teach refocusing a camera, by using a new benchmark image after a camera has been repaired, relocated, or had the camera view changed at the worksite and before taking more images of a property.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Applicant argues on page 78: “APPLICANT CLAIM 29. A method for refocusing a camera system for capturing images, after the camera system has been repaired or an element of the camera system replaced, the method comprising: … In Upendran "The picture taker moves around the building-taking a plurality (e.g., 4-16 for an entire building) of ground level images." (para 0021) In EarthCam, after a camera is repaired the camera must be refocused. Before a camera becomes operational again, images are taken at the worksite for use as benchmark images, as part of the focusing process. An onsite operator and a remote operator work together to focus the camera.… • Upendran does not suggest or teach refocusing a camera, by using a new benchmark image after a camera has been repaired, and before taking more images of a property. • Shanmugam does not suggest or teach refocusing a camera, by using a new benchmark image after a camera has been repaired, and before taking more images of a property.”
Examiner again notes that Applicant states an opinion of what the invention is intended to be but does not address the specific claim language and the specific reasons for rejection of that claim language. Further, the argument states a conclusion but does not present an argument to address the specific reasons for rejection cited for the specific claim elements. See reasons for rejection based on obviousness below.
Specification
The substitute specification filed on 09/30/2025 has not been entered because it does not conform to 37 CFR 1.125(b) and (c): Applicant changes the language of the Specification from “The Client Request Form includes a requirement to change the focus for camera system (100). For example, for operation different client location .”
Drawings
The drawings were received on 09/30/2025. These drawings are not entered because they present substantive detail that was not present in the drawings that were originally filed. Applicant should provide support for the substitute drawings based on the drawings in the original or priority applications, file a CIP with the corrected drawings, and/or remove the defective drawings from the present specification.
Figs. 2H, 2I, 2J, 2K, and 2M are objected to under 37 CFR 1.83(a) because they show only labels and fail to show structural detail. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d).
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 1, 15, 16, 17, and 28 are objected to because of the following informalities:
The disclosure is objected to under 37 CFR 41.106(a) because of the following informalities: the [Specification, Claims, Figure] pages are of poor quality, which renders them illegible to document processing. In papers, including affidavits, created for the proceeding: (i) markings must be in black ink or must otherwise provide an equivalently permanent, dark, high-contrast image on the paper. The quality of printing must be equivalent to the quality produced by a laser printer. Either a proportional or monospaced font may be used, but the proportional font must be 12-point or larger and a monospaced font must not contain more than 4 characters per centimeter (10 characters per inch). Case names must be underlined or italicized.
Applicant is reminded to submit its papers in clear dark print; future submissions of documents using colored text, such as tracked changes, may be denied, particularly where such a submission is rendered illegible by document processing.
[Image: media_image1.png, greyscale]
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
“When a patent claims a structure already known in the prior art that is altered by the mere substitution of one element for another known in the field, the combination must do more than yield a predictable result.” KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 415, 82 USPQ2d 1385 (2007).
Broadly providing an automatic or mechanical means to replace a manual activity which accomplished the same result is not sufficient to distinguish over the prior art. In re Venner, 262 F.2d 91, 95, 120 USPQ 193, 194 (CCPA 1958); MPEP § 2144.04(III); FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016). A claimed improvement by use of a computer requires a nexus to a particularly claimed algorithm and may not come “solely from the capabilities of a general-purpose computer.” FairWarning, 839 F.3d at 1095.
Claims 1, 3-6, 8, 12-17, 21-24, 26, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180225869 to Upendran (“Upendran”) in view of US 20200145583 to Shanmugam (“Shanmugam”), and further in view of US 20080169922 to Issokson (“Issokson”).
Regarding Claim 1: “A method for setting up a camera system for recording images, the method comprising:
a. a remote operator retrieving a stored benchmark image from a docu-vault, the stored benchmark image including a stored target object; (“The series of captured ground level images will be uploaded to both image database 104 [docu-vaults] and image processing servers 102 [remote operators] … Building images [including a stored target object] meeting the minimum threshold for quality are identified using a database of previously stored images 1008”; in this case the database is an example embodiment of the “docu-vault.” Upendran, Paragraphs 21, 50. Further note that the “Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison),” which indicates that the quality determination can be performed remotely on a server (automated operator) based on retrieval of similar historical images. See Upendran, Paragraph 25 and Fig 2.)
b. an onsite operator identifying an onsite target object at the client [construction work] site and communicating a description of the onsite target object to the remote operator, (“guiding a user of a capture device 108 ( e.g., smartphone) to more accurately capture a series of ground level images of a building [identified target object]. … The series of captured ground level images [descriptions of onsite target object] will be uploaded to both image database 104 and image processing servers [remote operators] 102 to be stored and processed” Upendran, Paragraph 21. See additional descriptions and information passed between the camera and the servers in Upendran, Paragraphs 23-24 and Fig 2. See treatment of uses at a construction work site below.)
“c. executing a focus operation based on a predetermined camera system focus specification;” (“The graphical overlay guides are not used to assist in focusing the camera,” thus the focus settings are predetermined by the camera system and not set by the user. Upendran, Paragraph 23.
Cumulatively, note that Upendran provides for “a predetermined camera focus specification” that is inherent to the camera taking a picture (a manual or an autofocus setting of the camera), but does not determine a numerical focus specification when capturing the image.
Shanmugam teaches this feature in the context of focus and autofocus functionality in cameras: “For each captured image the system will compute a score for sharpness (focus),” Shanmugam, Paragraph 47 and example computations in Paragraph 32. “the disclosed image processing device 102 further offers an automatic adjustment of one or more camera parameters (for example a focal point/autofocus (AF) points,” Shanmugam, Paragraph 26. See statement of motivation below.)
d. capturing an onsite image, including the onsite target object, using the predetermined focus specification; (“The picture can be taken manually by the user or automatically taken when substantially aligned with the overlay guide” Upendran, Paragraph 24 and Fig 2. See additional treatment of focus specification below.)
e. determining pixel characteristics of the onsite target object in the onsite image and stored target object in the stored benchmark image; (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” where similar historical image stores a similar target object. Upendran, Paragraph 25 and Fig 2.
Upendran does not explicitly teach that image quality is determined based on pixel characteristics; however, it is well understood in the art that digital image quality is evaluated based on properties of the pixels in combination.
Shanmugam provides specific examples of calculating a quality metric based on pixels: “Examples of the blur [focus] estimation techniques may include, but are not limited to, a Laplacian of Gaussian (LoG) filter or a Laplacian of pixel values, a Fast Fourier Transform (FFT)-based technique, a Sobel-Tenengrad operator (sum of high-contrast pixels),” Shanmugam, Paragraph 32.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Upendran to calculate a quality metric (blur / focus) based on pixel characteristics as taught in Shanmugam, in order to determine particular properties of the image that are desired to be improved, such as blur / focus. Shanmugam, Paragraph 32.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.)
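For illustration only, the kind of pixel-based sharpness (focus) metric Shanmugam describes — “a Laplacian of pixel values” — can be sketched as follows. The function name, the toy images, and the pure-Python formulation are hypothetical and are not drawn from either reference; this is merely one conventional instance of such a computation.

```python
def laplacian_variance(gray):
    """Sharpness score: variance of the 4-neighbour Laplacian response over
    a 2-D grayscale image (list of rows of ints). Higher = sharper edges,
    i.e., better focus; a defocused (flat) patch scores near zero."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4 neighbours minus 4x the centre.
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

sharp = [[0, 0, 255, 255]] * 4   # hard vertical edge -> strong Laplacian response
flat = [[128] * 4] * 4           # uniform (blur-like) patch -> zero response
```

A thresholded version of such a score is one way the “Good, Bad or Best Available” indications of Upendran could be derived from pixel data.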
f. using a rubric, the remote operator determining if the pixel characteristics of the onsite target object in the onsite image and the stored target object in the stored benchmark image are similar and adequate; (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” thus determining good / adequate quality based on comparison to historical images. Upendran, Paragraph 25 and Fig 2.)
g. capturing an additional onsite image, including the onsite target object, using a different focus specification; (“If the image capture is bad ( e.g., left/right/top/bottom boundaries cut off), the user is prompted to either retake the image or select best available. … low visual image quality ( e.g., poor lighting, poor resolution, blurred [bad focus], include obstructions, out of frame, etc.). … the user may receive real-time feedback reflective of the quality and/or actions to take in response to a specific quality indication.” Upendran, Paragraphs 25, 46-47, and Fig 2. Thus, when the image has a bad focus quality indication, the user will be prompted to retake the image in order to capture images with a different focus (i.e., a different focus specification), until the focus quality is good enough.)
h. using one or more rubrics, the remote operator determining if the pixel characteristics of the onsite target object in the additional onsite image and the stored target object in the stored benchmark image are similar and adequate; (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” thus determining good / adequate quality based on comparison to historical images. Upendran, Paragraph 25 and Fig 2.)
i. repeating steps g. and h. until the pixel characteristics of the onsite target object in a most recent additional onsite image and the stored target object in the stored benchmark image are similar and adequate, and designating a most recent rubric score as a final rubric score and a most recent additional onsite image as an updated benchmark image; (“If the image capture is bad ( e.g., left/right/top/bottom boundaries cut off), the user is prompted to either retake the image or select best available,” until the captured image is good enough. Upendran, Paragraph 25 and Fig. 2 steps 208 and 216.)
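The retake-until-adequate loop of Upendran (Fig. 2, steps 208 and 216) can be sketched as follows, for illustration only. All names (`capture`, `refine_benchmark`, `ADEQUATE`) and the stand-in camera model are hypothetical, not taken from the reference; the sketch only shows the control flow of repeating capture with different settings until a rubric is met and designating the most recent result as final.

```python
ADEQUATE = 0.9  # hypothetical rubric threshold for "similar and adequate"

def capture(focus):
    # Stand-in for the camera: image quality improves as the focus setting
    # approaches an assumed optimum of 5.
    return max(0.0, 1.0 - abs(focus - 5) * 0.2)

def refine_benchmark(focus_settings):
    """Try successive focus settings; return (final_score, final_focus) for
    the first capture meeting the rubric — i.e., the most recent rubric
    score and image become the final score / updated benchmark."""
    for focus in focus_settings:
        score = capture(focus)
        if score >= ADEQUATE:
            return score, focus
    raise RuntimeError("no focus setting met the rubric")

print(refine_benchmark([1, 3, 5]))  # → (1.0, 5)
```

In Upendran the loop is driven by user prompts ("retake the image or select best available") rather than an enumerated list of settings, but the terminating structure is the same.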
wherein the camera continuously takes images at the [construction work] site; (“For example, the capture devices include, but are not limited to: a camera, a phone, a smartphone, a tablet, a video camera,” which takes a continuous series of images comprising a video. See Upendran, Paragraphs 18, 56. See treatment of use at the construction work site below.)
j. updating a record with an identifier for the updated benchmark image; and (As noted the image quality processing “may include historical data (similar image comparison) for refinement or learning capability,” which indicates that images are retained and become historical data for future comparison and process refinement. See Upendran, Paragraph 25.)
k. storing the updated benchmark image.” (As noted the image quality processing “may include historical data (similar image comparison) for refinement or learning capability,” which indicates that images are retained and become historical data for future comparison and process refinement. See Upendran, Paragraph 25. Also note an embodiment where “In step 1012, each of building images meeting the minimum quality threshold is aggregated ( e.g., stored as a designated building file in computer memory).” Upendran, Paragraph 52.)
[method for setting up a camera system for recording images … an onsite operator identifying an onsite target object] at the client construction work site … wherein the camera is installed at a fixed construction work site; … [wherein the camera continuously takes images] at the construction work site; (First, note that a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. According to the Specification, the same claimed “method for setting up a camera system for recording images” can be used at construction sites: “Supertall building and vast construction sites need a camera … The Camera System operates in a hostile environment like, but not limited to, construction sites, tall building construction and civil engineering projects.” Specification, Page 2, lines 16-17 and Page 6, lines 12-15. Thus, the intended location where the camera is used or installed does not result in a structural difference in the claimed invention, and this element is rejected based on the rejections of the other claim elements.
A cumulative reason for rejection is: If the prior art structure is capable of performing the intended use, then it meets the claim. Here, Upendran teaches: “Capture device(s) 108 is in communication with image processing servers 102 for collecting images of building objects,” and thus the camera setup in the prior art is capable of operating and capturing images of building objects when the building is under construction.
Similarly Issokson teaches that “the camera 46 can be positioned anywhere in the construction site [specific/fixed construction site] and can be mounted on a stand (not shown) with a mechanism to rotate the camera 46 into different viewing angles and areas” Issokson, Paragraph 27.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Upendran to install and use the camera at a fixed construction site as taught in Issokson and enabled in Upendran, in order to apply the method to monitor a construction site. See Issokson, Paragraph 27.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.)
Regarding Claim 3: “The method of claim 1, wherein steps e. and f. are executed by the remote operator.” (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102)” Upendran, Paragraph 25.)
Regarding Claim 4: “The method of claim 1, wherein steps c. and d. are executed by the onsite operator.” (“The picture can be taken manually by the user or automatically taken when substantially aligned with the overlay guide” using the focus settings predetermined by the camera device. Upendran, Paragraphs 23-24 and Fig 2.)
Regarding Claim 5: “The method of claim 1, wherein the rubric is one or more of metric, digital, or subjective.” (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” thus employing digital metrics of subjective quality (Good, Bad, Best). Upendran, Paragraph 25 and Fig 2.)
Regarding Claim 6: “The method of claim 1, further comprising updating a client request form with a grade based on whether the pixel characteristics of the onsite target object in the onsite image and the pixel characteristics or the stored target object in the stored benchmark image are similar and adequate.” (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” thus employing digital metrics of subjective grade of quality (Good, Bad, Best). Upendran, Paragraph 25 and Fig 2. In this case, the client request form is embodied in a user interface form for receiving feedback from the process.)
Regarding Claim 8: “The method of claim 1, further comprising updating one or more of … a client request form with an identification of the updated benchmark image, … a camera system log with an identification of the updated benchmark image, … the client request form with a focus specification for the updated benchmark image, and … a camera system log with a focus specification for the updated benchmark image.” (“The series of captured ground level images will be uploaded to both image database 104 and image processing servers 102 … Building images meeting the minimum threshold for quality are identified using a database of previously stored images 1008,” in this case the database stores the camera system log and is updated with new benchmark images and their identification. Upendran, Paragraphs 21, 50.)
Regarding Claim 12: “The method of claim 1, wherein a step of storing comprises storing the updated benchmark image in a docu-vault and assigning a unique identifier to the updated benchmark mage.” (“The series of captured ground level images will be uploaded to both image database 104 and image processing servers 102 … Building images meeting the minimum threshold for quality are identified using a database of previously stored images 1008,” in this case the database stores the camera system log and is updated with new benchmark images and their identification information. Upendran, Paragraphs 21, 50.)
Regarding Claim 13: “The method of clam 1, wherein the pixel characteristics of a target object are determined based on a bounding box of a target object.” (“If the image capture is bad ( e.g., left/right/top/bottom boundaries cut off), the user is prompted to either retake the image or select best available.” Upendran, Paragraph 25.)
Regarding Claim 14: “The method of clam 1, further comprising if an image includes a distorted or obscured object, a remote operator requesting that an onsite operator capture a ground truth image of the distorted or obscured object for use in correcting or replacing the distorted or obscured object in the image, thereby creating a replacement image and adding the replacement image to the docu-narrative.” (“If the image capture is bad ( e.g., left/right/top/bottom boundaries cut off) [the object is distorted or obscured], the user is prompted [by the local or remote image processing system] to either retake the image or select best available.” Upendran, Paragraph 25.)
Regarding Claim 15: “The method of claim 1, further comprising, prior to step a.:
an operator conducts a laboratory focus setup process, (“The picture can be taken manually by the user or automatically” providing two examples of operators. Upendran, Paragraph 24.)
automatically focusing the camera system based on an initial focus specification; (“The disclosed image processing device may control the image capture device [having a resolution and a focus setting] to capture a first image (for example, an interim or a preview image). … The image processing device 102 may be further configured to adjust a focal point (e.g., an autofocus (AF) point/Focus points) of the image capture device 106 to focus on the first object 110 that may be determined as the blur object.” Shanmugam, Paragraphs 13, 23. See statement of motivation in Claim 1.)
automatically capturing an automatic image of a resolution and focus device; (“The picture can be taken manually by the user or automatically” Upendran, Paragraph 24. For example, “The disclosed image processing device may control the image capture device [having a resolution and focus] to capture a first image (for example, an interim or a preview image).” Shanmugam, Paragraph 13. See statement of motivation in Claim 1.)
selecting a selected target object in the automatic image; (“one or more objects (i.e. sharper objects in image) initially identified in the first Image … user to dynamically select or deselect different type of objects that may be identified” Shanmugam, Paragraph 13. See statement of motivation in Claim 1.)
manually focusing the camera; (“focus settings may have to be manually adjusted to recalibrate camera to capture an image of a scene with desired objects in focus.” Shanmugam, Paragraph 3 and automation in Paragraph 13. See treatment of automation above and statement of motivation in Claim 1. “The image processing device 102 may be further configured to control the image capture device 106 to capture a second image of the scene 108 based on the adjusted focal point of the image capture device.” Shanmugam, Paragraph 23. See statement of motivation in Claim 1.)
manually capturing a manual image; (“The picture can be taken manually by the user or automatically” Upendran, Paragraph 24. See statement of motivation in Claim 1.)
identifying the selected target object in the manual image; (“one or more objects (i.e. sharper objects in image) initially identified in the first Image … user to dynamically select or deselect different type of objects that may be identified” Shanmugam, Paragraph 13. See statement of motivation in Claim 1.)
determining whether pixel characteristics of the selected target object in the automatic image and pixel characteristics of the selected target object in the manual image are similar and adequate based on a predetermined rubric, rules or an algorithm; (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, two pictures taken at different times and with different settings will never be identical, and the Specification does not use the term “similar” in that absolute sense; rather, the images may be sufficiently similar under the testing criteria noted in Specification, Paragraphs 895, 971. The prior art teaches this manner of determination by comparison to previously acquired images: “Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” thus determining good / adequate quality based on comparison to historical images. Upendran, Paragraph 25 and Fig 2. Also see blur / focus specific comparison criteria in Shanmugam, Paragraphs 22, 24, and statement of motivation in Claim 1.)
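For illustration only, one conventional way to decide whether two captures of the same target object are “similar and adequate” under a numeric rubric is to compare per-pixel characteristics within a tolerance. The function name, thresholds, and the flattened-pixel-list representation are hypothetical and are not drawn from the references or the Specification.

```python
def similar_and_adequate(auto_px, manual_px, tol=10, min_mean=50):
    """auto_px / manual_px: equal-length lists of grayscale values for the
    target object's region in the automatic and manual images.
    Similar: mean absolute per-pixel difference <= tol.
    Adequate: both captures exceed a minimum mean brightness."""
    diff = sum(abs(a - b) for a, b in zip(auto_px, manual_px)) / len(auto_px)
    adequate = (sum(auto_px) / len(auto_px) >= min_mean and
                sum(manual_px) / len(manual_px) >= min_mean)
    return diff <= tol and adequate
```

Under such a rubric, the comparison is against a tolerance rather than strict identity, consistent with the broadest-reasonable-interpretation discussion above.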
if the pixel characteristics of the target object in the automatic image and the pixel characteristics of the target object in the manual image are similar and adequate, determining parameters of the focus elements; (“Therefore, image quality includes, but is not limited to, predetermined acceptable camera-based parameters” Upendran, Paragraph 47. See blur / focus quality comparison criteria in Shanmugam, Paragraphs 22, 24, and statement of motivation in Claim 1.)
identifying the automatic image or the manual image as a camera benchmark image; and (“Thus, the image processing device generates a user-desired well focused image … The image processing device 102 may be further configured to replace the first object 110 (i.e. with first blur value) in the first image with the first object 110 (i.e. with second blur value) in the second image,” Shanmugam, Paragraphs 22, 23 and statement of motivation in Claim 1.)
assigning a unique identifier to the camera benchmark image.” (“operating system software with its associated file management system software … to store data in the memory, including storing files … each of building images meeting the minimum quality threshold is aggregated ( e.g., stored as a designated building file [unique id] in computer memory).” Upendran, Paragraphs 45, 52.)
Regarding Claim 16: “The method of claim 15, wherein if the pixel characteristics of the selected target object in the automatic image and pixel characteristics of the selected target object in the manual image are not similar and adequate based on the predetermined rubric, rules or an algorithm, repeating steps of manually focusing, manually capturing, and identifying the selected target object, until a result of a step of determining is affirmative.” (“If the image capture is bad ( e.g., left/right/top/bottom boundaries cut off), the user is prompted to either retake the image or select best available,” until the captured image is good enough. Upendran, Paragraph 25 and Fig. 2 steps 208 and 216. See using focus as a measure of quality in Claim 15.)
Regarding Claim 17: “The method of claim 15, wherein the predetermined rubric comprises one or more of metric, digital, and subjective characteristics that permit determining whether pixel characteristics of the selected target object in the automatic image and pixel characteristics of the selected target object in the manual image are similar and adequate.” (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” thus employing digital metrics of subjective quality (Good, Bad, Best). Upendran, Paragraph 25 and Fig 2.)
Regarding Claim 21: “The method of claim 15, further comprising storing the camera benchmark image in a docu-vault.” (“Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison)” which indicates that the quality determination can be performed remotely on a server (automated operator) based on retrieval of similar historical images. See Upendran, Paragraph 25 and Fig 2.)
Regarding Claim 22: “The method of claim 15, further comprising updating one or both of a client request form and a camera system log to indicate that the pixel characteristics of the target object in the automatic image and the target object in the manual image are adequate.” (“The series of captured ground level images will be uploaded to both image database 104 and image processing servers 102 … Building images meeting the minimum threshold for quality are identified using a database of previously stored images 1008,” in this case the database stores the camera system log and is updated with new benchmark images and their identification. Upendran, Paragraphs 21, 50.)
Regarding Claim 23: “A method for changing a camera system focus, the camera system for recording images, the method comprising:
a. receiving a client request to change a camera system focus to acquire images at a different distance from the camera system; (“the user may receive real-time feedback reflective of the quality and/or actions to take in response to a specific quality indication.” Here, quality can be indicated in terms of blur (focus) and distance. Upendran, Paragraphs 24-25. See details of adjusting focus in Shanmugam, Paragraphs 13, 23. See statement of motivation in Claim 1.)
b. retrieving a benchmark image from the docu-vault; (“Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison)” which indicates that the quality determination can be performed remotely on a server (automated operator) based on retrieval of similar historical images. See Upendran, Paragraph 25 and Fig 2.)
c. an onsite operator identifying a target object at the different distance; (“In step 204, a first overlay from a set of sequential graphical overlay guides is retrieved for display on capture device 108 [onsite device]. … the system receives a selection of which overlay guide best matches the present perspective of the capture device (manually from the user or automatically [onsite operator] based on location/perspective/other image processing … a user is prompted to start with a specific overlay … In addition, client-side feedback can assist the user, for example, visual indicators such as text prompts …” which helps the operator take the picture of the same object with better settings. Upendran, Paragraphs 23-24 and Fig 2.)
“d. the onsite operator capturing an onsite image of the target object using focus parameters” (“The picture can be taken manually by the user or automatically taken when substantially aligned with the overlay guide” Upendran, Paragraph 24 and Fig 2. See additional treatment of focus specification below. “The graphical overlay guides are not used to assist in focusing the camera,” thus the focus settings are predetermined by the camera system and not set by the user using “a predetermined camera focus specification” that is inherent to the camera taking a picture, a manual or an autofocus setting of the camera. Upendran, Paragraph 23.
Cumulatively note that Shanmugam teaches to determine a numerical focus specification in capturing the image in the context of focus and autofocus functionality in cameras: “For each captured image the system will compute a score for sharpness (focus),” Shanmugam, Paragraph 47 and example computations in Paragraph 32. “the disclosed image processing device 102 further offers an automatic adjustment of one or more camera parameters (for example a focal point/autofocus (AF) points,” Shanmugam, Paragraph 26. See statement of motivation in Claim 1.)
e. using a rubric, a remote operator determining if pixel characteristics of the target object in the onsite image and a target object in the benchmark image are similar and adequate; (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” where similar historical image stores a similar target object. Upendran, Paragraph 25 and Fig 2.
Upendran does not explicitly teach that image quality is determined based on pixel characteristics; however, it is well understood in the art that digital image quality is evaluated based on properties of the pixels in combination.
Shanmugam provides specific examples of calculating a quality metric based on pixels: “Examples of the blur [focus] estimation techniques may include, but are not limited to, a Laplacian of Gaussian (LoG) filter or a Laplacian of pixel values, a Fast Fourier Transform (FFT)-based technique, a Sobel-Tenengrad operator (sum of high-contrast pixels),” Shanmugam, Paragraph 32. See statement of motivation in Claim 1.
f. repeating steps d. and e. using different focus parameters until the pixel characteristics of the target object in a most recent onsite image and the target object in the benchmark image are similar and adequate according to a rubric score, then designating a most recent rubric score as a final rubric score, a most recent onsite image as an updated benchmark image, and a most recent focus parameters as final focus parameters; (“If the image capture is bad ( e.g., left/right/top/bottom boundaries cut off), the user is prompted to either retake the image or select best available,” until the captured image is good enough. Upendran, Paragraph 25 and Fig. 2 steps 208 and 216.)
g. updating a record with one or more of the final rubric score, the updated benchmark image, and the final focus parameters; and (As noted the image quality processing “may include historical data (similar image comparison) for refinement or learning capability,” which indicates that images are retained and become historical data for future comparison and process refinement. See Upendran, Paragraph 25.)
h. storing the updated benchmark image.” (As noted the image quality processing “may include historical data (similar image comparison) for refinement or learning capability,” which indicates that images are retained and become historical data for future comparison and process refinement. See Upendran, Paragraph 25. Also note an embodiment where “In step 1012, each of building images meeting the minimum quality threshold is aggregated ( e.g., stored as a designated building file in computer memory).” Upendran, Paragraph 52.)
Regarding Claim 24: “The method of claim 23, further comprising assigning a unique identifier to the updated benchmark image.” (“The series of captured ground level images will be uploaded to both image database 104 and image processing servers 102 … Building images meeting the minimum threshold for quality are identified using a database of previously stored images 1008,” in this case the database stores the camera system log and is updated with new benchmark images and their various identifications (name, location, property, etc.) in the database. Upendran, Paragraphs 21, 50.)
Regarding Claim 26: “The method of claim 23, wherein a step of storing comprises storing the updated benchmark image in the docu-vault.” (“The series of captured ground level images will be uploaded to both image database 104 and image processing servers 102 … Building images meeting the minimum threshold for quality are identified using a database of previously stored images 1008,” in this case the database is an example embodiment of the “docu-vault.” Upendran, Paragraphs 21, 50.)
Regarding Claim 29: “A method for refocusing a camera system for capturing images, after the camera system has been repaired or an element of the camera system replaced, the method comprising:
a. retrieving a stored benchmark image; (“Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison)” which indicates that the quality determination can be performed remotely on a server (automated operator) based on retrieval of similar historical images. See Upendran, Paragraph 25 and Fig 2.)
b. retrieving an onsite field image using focus parameters; (“The picture can be taken [retrieved] manually by the user or automatically taken when substantially aligned with the overlay guide … In addition, client-side feedback can assist the user” based on the retrieved image. Upendran, Paragraph 24 and Fig 2. See additional treatment of focus specification below. “The graphical overlay guides are not used to assist in focusing the camera,” thus the focus settings are predetermined by the camera system and not set by the user using “a predetermined camera focus specification” that is inherent to the camera taking a picture, a manual or an autofocus setting of the camera. Upendran, Paragraph 23.
Cumulatively note that Shanmugam teaches to determine a numerical focus specification in capturing the image in the context of focus and autofocus functionality in cameras: “For each captured image the system will compute a score for sharpness (focus),” Shanmugam, Paragraph 47 and example computations in Paragraph 32. “the disclosed image processing device 102 further offers an automatic adjustment of one or more camera parameters (for example a focal point/autofocus (AF) points,” Shanmugam, Paragraph 26. See statement of motivation in Claim 1.)
c. determining if pixel characteristics of a target object in the onsite field image and a target object in the stored benchmark image are similar and adequate; (“Visual indicators of image quality are included 216 to determine image quality (Good, Bad or Best Available). Image quality can be determined in real time (e.g., milliseconds) either using onboard software or remotely using server processing (102) and may include historical data (similar image comparison) for refinement or learning capability,” where similar historical image stores a similar target object. Upendran, Paragraph 25 and Fig 2.
Upendran does not explicitly teach that image quality is determined based on pixel characteristics; however, it is well understood in the art that digital image quality is evaluated based on properties of the pixels in combination.
Shanmugam provides specific examples of calculating a quality metric based on pixels: “Examples of the blur [focus] estimation techniques may include, but are not limited to, a Laplacian of Gaussian (LoG) filter or a Laplacian of pixel values, a Fast Fourier Transform (FFT)-based technique, a Sobel-Tenengrad operator (sum of high-contrast pixels),” Shanmugam, Paragraph 32. See statement of motivation in Claim 1.
d. repeating steps b. and c. using different focus parameters until the pixel characteristics of the target object in a most recent onsite field image and the target object in the stored benchmark image are similar and adequate according to a rubric score, then designating a most recent rubric score as a final rubric score, a most recent onsite field image as an updated stored benchmark image, and a most recent focus parameters as final focus parameters; (“If the image capture is bad (e.g., left/right/top/bottom boundaries cut off), the user is prompted to either retake the image or select best available,” until the captured image is good enough. Upendran, Paragraph 25 and Fig. 2, steps 208 and 216.)
e. updating a record with one or more of the final rubric score, the updated benchmark image, and the final focus parameters; and (As noted the image quality processing “may include historical data (similar image comparison) for refinement or learning capability,” which indicates that images are retained and become historical data for future comparison and process refinement. See Upendran, Paragraph 25.)
f. storing the updated stored benchmark image.” (As noted, the image quality processing “may include historical data (similar image comparison) for refinement or learning capability,” which indicates that images are retained and become historical data for future comparison and process refinement. See Upendran, Paragraph 25. Also note an embodiment where “In step 1012, each of building images meeting the minimum quality threshold is aggregated (e.g., stored as a designated building file in computer memory).” Upendran, Paragraph 52.)
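For illustration only, the capture-and-compare loop of claim elements b. through d. can be sketched as follows; `capture`, `score`, the candidate focus parameters, and the threshold are hypothetical stand-ins, not drawn from any cited reference:

```python
def focus_until_adequate(capture, score, focus_candidates, threshold):
    """Sketch of the claimed loop: retake with different focus parameters
    until the rubric score against the stored benchmark is adequate.

    `capture(params)` stands in for retrieving an onsite image (step b),
    and `score(image)` for comparing pixel characteristics of the target
    object against the stored benchmark image (step c).
    """
    best = None
    for params in focus_candidates:
        image = capture(params)            # step b: retrieve onsite image
        rubric = score(image)              # step c: compare to benchmark
        if best is None or rubric > best[0]:
            best = (rubric, image, params)
        if rubric >= threshold:            # step d: stop when adequate
            break
    # Designate the most recent best results as final (step d, tail)
    final_rubric, updated_benchmark, final_params = best
    return final_rubric, updated_benchmark, final_params
```

If no candidate reaches the threshold, the sketch returns the best available result, analogous to Upendran's "retake the image or select best available" prompt.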
Claims 2 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180225869 to Upendran (“Upendran”) in view of US 20200145583 to Shanmugam (“Shanmugam”) and further in view of US 20230419410 to Samarasekera (“Samarasekera”).
Regarding Claim 2: “The method of claim 1, wherein a step of retrieving is executed using an address of the stored benchmark image in the docu-vault, the address comprising a numeric chronological feature, and a multi-level and hierarchical sequence numbering feature.” (Under the broadest reasonable interpretation consistent with the specification and ordinary skill in the art, this element is directed to a database (docu-vault) address / identifier by which the image can be located and retrieved.
Upendran and Shanmugam do not teach using a database identifier comprising a numeric chronological feature, and a multi-level and hierarchical sequence numbering feature. Upendran teaches storing images in a database with identifiers of location and quality but does not explicitly teach using hierarchical chronological identifiers. See Upendran, Paragraphs 27-29, 50. However, images are commonly ordered in a database by date and time of capture:
Samarasekera teaches the above claim embodiment in the context of collecting and storing images in a database: “At 312, the images that are collected, along with location and camera information, will be sent to the image evaluator 142 on the centralized server 130, … The location and camera information will include geographic location, heading, pitch and tilt of the camera and other information of collection time (time, day, …” where time and day indicate numeric chronological features having a multi-level time-based hierarchy which serve as identifiers or addresses of the images in the database. See Samarasekera, Paragraph 45. See statement of motivation in Claim 1.)
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of the databases and image identifiers in Upendran to use date and time as image identifiers and database addresses which comprise a numeric chronological feature, and a multi-level and hierarchical sequence numbering feature, as taught in Samarasekera, in order to be able to locate the images in the database based on desired identifiers and image properties. See Upendran, Paragraphs 27-29, 50 and Samarasekera, Paragraph 45.
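For illustration only, a multi-level, chronologically ordered identifier of the kind discussed above might be formed as follows; the path layout, `site_id` field, and zero-padded sequence number are illustrative assumptions, not drawn from Samarasekera:

```python
from datetime import datetime

def chronological_address(captured_at, site_id, seq):
    """Build a hierarchical, chronologically sortable image address.

    A multi-level path of year/month/day/hour plus a per-site sequence
    number, in the spirit of using collection time and day (Samarasekera,
    Paragraph 45) as a database identifier. Zero-padding makes plain
    string ordering agree with chronological ordering.
    """
    return "{:04d}/{:02d}/{:02d}/{:02d}/{}/{:04d}".format(
        captured_at.year, captured_at.month, captured_at.day,
        captured_at.hour, site_id, seq)
```

Because each level is fixed-width and ordered from coarse (year) to fine (sequence number), lexicographic comparison of two addresses matches their capture order.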
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.
Claim 25 is rejected for reasons stated for Claim 2 in view of the Claim 24 rejection.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over US 20180225869 Upendran (“Upendran”) in view of US 20200145583 to Shanmugam (“Shanmugam”), further in view of US 20080169922 to Issokson (“Issokson”), and further in view of US 20220270405 to Maruyama (“Maruyama”).
Regarding Claim 7: “The method of claim 1, further comprising the onsite operator notifying the remote operator of a date and a time of arrival of the camera system at a client site prior to executing step a.”
Upendran and Shanmugam do not teach “the onsite operator notifying the remote operator of a date and a time of arrival of the camera system at a client site prior to executing step a.”
Maruyama teaches this feature in the context of camera imaging and image history creation: “From this setting screen, setting values of the time period (such as a start date and time and an end date and time of the time period)” Maruyama, Paragraph 136. This information along with other media and report information can be processed or made available at a server, corresponding to the remote operator in the claims above. See Maruyama, Paragraphs 92, 156.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Upendran and Shanmugam by “notifying the remote operator of a date and a time of arrival of the camera system at a client site” as taught in Maruyama, in order to report or record information at the remote site for further processing or use. See Maruyama, Paragraph 136.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness. Marking a date and time on a digital communication, log, or report is ordinarily performed in the art of imaging and media transmission.
Claims 9-11, 18-20, and 27-28 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180225869 Upendran (“Upendran”) in view of US 20200145583 to Shanmugam (“Shanmugam”), in view of US 20080169922 to Issokson (“Issokson”), in view of US 20050276590 to Ishikawa (“Ishikawa”), and in view of US 20110043936 to Mori (“Mori”).
Regarding Claim 9: “The method of claim 1, wherein the focus specification comprises a position of a servo motor benchmark zero-degree marker relative to a servo motor 360-degree marker for the camera system.” (“The second lens unit (focusing lens) 12 receives a drive force from an Auto-Focus (AF) motor 16, and can move along the optical axis AXL to perform focus adjustment. … A focus driver 213 drives the AF motor 16 in accordance with the information regarding the amount and direction by which the second lens unit 12 is to be driven, the information being transmitted from the camera CPU 101.” Ishikawa, Paragraphs 34, 62, 77 and Fig. 1. As shown in Fig. 1, the rotation of the motor 16 from 0 to 360 degrees would change the position of the second lens unit 12 by a known (i.e. benchmark) focus amount and the CPU 101 determines how much motor time / rotation is required to achieve a particular focus.)
Ishikawa does not explicitly teach that there is a “benchmark zero-degree marker relative … 360-degree marker”; however, it is understood that the AF system knows the position of the lens and the corresponding position and degrees of rotation of the motor, which allows the CPU to control the focus setting.
Cumulatively, Mori explicitly describes this function in a camera: “The lens device has a first MF mode that outputs an absolute position signal corresponding to the rotational position of a focus ring as an instruction to move the focus lens and a second MF mode that outputs a relative position signal corresponding to the amount of rotation of the focus ring as an instruction to move the focus lens,” Mori, Paragraphs 5 and 72.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Upendran, Shanmugam, and Ishikawa to track the positions of the lens, the focus gears, and the motor such that the system knows when the motor is at the “zero-degree marker … 360-degree marker,” as taught in Mori, in order to perform manual focus and autofocus in the camera. See Mori, Paragraphs 5 and 72.
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.
Regarding Claim 10: “The method of claim 1, wherein the focus specification relates to a number of degrees for a lens sleeve benchmark zero-degree marker relative to a camera body 360-degree marker.” (“A focus driver 213 drives the AF motor 16 in accordance with the information regarding the amount [time and angle] and direction by which the second lens unit 12 is to be driven,” Ishikawa, Paragraph 62. Similarly note that “when the first focus ring 70 is operated in the AF/MF mode and a relative position signal is input from the relative position detecting sensor connected to the gear 70B of the first focus ring 70, the driving portion controls the driving of the focus lens 36 on the basis of the relative position signal and moves the focus lens 36 by a distance corresponding to amount of rotation of the first focus ring 70,” thus the focus specification directly relates to the rotation of the motor and the corresponding position of the focusing lens. Mori, Paragraph 72. See statement of motivation in Claim 9.)
Regarding Claim 11: “The method of claim 9, using an algorithm to determine the amount of time to power a servo motor to rotate the servo motor a desired number of rotation degrees clockwise or counterclockwise to achieve a desired position of the servo motor benchmark zero-degree marker relative to the servo motor 360-degree marker.” (“A focus driver 213 drives the AF motor 16 in accordance with the information regarding the amount [time and angle] and direction by which the second lens unit 12 is to be driven,” Ishikawa, Paragraph 62, 34. Similarly see Mori, Paragraph 72. See statement of motivation in Claim 9.)
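For illustration only, the Claim 11 timing computation can be sketched as follows, assuming a constant, known rotation rate for the servo motor; the function and parameter names are hypothetical, not drawn from Ishikawa or Mori:

```python
def servo_on_time(current_deg, target_deg, deg_per_sec):
    """Compute how long to power a servo motor, and in which direction,
    to move from the current marker position to the target position.

    A hedged illustration of the Claim 11 algorithm: assumes a constant
    rotation rate `deg_per_sec` and picks the shorter arc on the 0-360
    degree dial (clockwise treated as the positive direction).
    """
    delta = (target_deg - current_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0  # shorter to rotate counterclockwise
    direction = "clockwise" if delta >= 0 else "counterclockwise"
    # Time to power the motor = angle to traverse / angular speed
    return abs(delta) / deg_per_sec, direction
```

This mirrors the cited principle that the focus driver is commanded with “the amount [time and angle] and direction” by which the lens unit is to be driven.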
Regarding Claim 18: “The method of claim 15, wherein a step of automatically focusing comprises the operator sending instructions to the camera system for rotating a focus gear a number of degrees clockwise or counterclockwise.” (“focus settings may have to be manually adjusted to recalibrate camera to capture an image of a scene with desired objects in focus.” Shanmugam, Paragraph 3, and automation in Paragraph 13. See use of focusing rings and gears in Mori, Paragraphs 58 and 72. See statement of motivation in Claim 9.)
Regarding Claim 19: “The method of claim 18, wherein an amount of time to rotate the focus gear is based on an algorithm to determine an amount of time to activate a servo motor for rotating the focus gear the number of degrees clockwise or counterclockwise.” (“A focus driver 213 drives the AF motor 16 in accordance with the information regarding the amount [time and angle] and direction by which the second lens unit 12 is to be driven, the information being transmitted from the camera CPU,” and thus is a product of a processed algorithm. See Ishikawa, Paragraphs 34 and 62.)
Regarding Claim 20: “The method of claim 19, wherein vibrations caused by activation of the servo motor are damped by a vibration damping element proximate the servo motor.” (“The focal shake correction is performed by driving the AF motor 16 in accordance with the information regarding the amount and direction by which the second lens unit 12 is to be driven for the focal shake correction,” where the second lens unit proximate the AF motor acts as a vibration damping element in this embodiment. Ishikawa, Paragraph 62. See another embodiment in Paragraph 36.)
Regarding Claim 27: “The method of claim 23, determining and recording a required time to activate camera system focus components to achieve the final focus parameters.” (“A focus driver 213 drives the AF motor 16 in accordance with the information regarding the amount [time and angle] and direction by which the second lens unit 12 is to be driven, the information being transmitted from the camera CPU,” and thus is a product of a processed algorithm. See Ishikawa, Paragraphs 34 and 62. See statement of motivation in Claim 9.)
Claim 28 is rejected for reasons stated for Claim 20 in view of the Claim 23 rejection.
Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over US 20180225869 Upendran (“Upendran”) in view of US 20200145583 to Shanmugam (“Shanmugam”), in view of US 20080169922 to Issokson (“Issokson”), and in view of US 8879813 to Solanki (“Solanki”).
Regarding Claim 30: “The method of claim 29, wherein the pixel characteristics comprise a number of pixels.” (Note that this is claimed in the context of “determining pixel characteristics of the onsite target object in the onsite image and stored target object in the stored benchmark image” in claim 29.
Upendran and Shanmugam do not teach the above claim feature in the claimed context. As noted in Claims 1 and 29, Upendran and Shanmugam compare stored and new images and objects using properties computed over the identified pixels, but do not directly recite the number of pixels as a compared characteristic.
Solanki teaches the above claim feature in the context of comparing image properties: “validates that the images in an encounter share the same fundus mask by computing the image-level fundus masks and ensuring that the two masks obtained differ in less than, for example, 10% of the total number of pixels in each image.” Solanki, Column 21, lines 19-21.
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to supplement the teachings of Upendran and Shanmugam to use the number of pixels as an image characteristic that can be compared to other images, as taught in Solanki, in order to compare images based on statistical properties of their pixels. See Shanmugam, Paragraph 32 and Solanki, Column 21, lines 19-21.
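For illustration only, the quoted Solanki validation (masks differing in less than 10% of the total number of pixels) can be sketched as follows; the function name, binary-mask representation, and 10% default are illustrative, taken only from the quoted passage:

```python
def masks_match(mask_a, mask_b, max_diff_fraction=0.10):
    """Decide whether two binary image masks agree, in the style of the
    Solanki validation: the masks must differ in fewer than, e.g., 10%
    of the total number of pixels.

    Masks are 2D lists of 0/1 values with equal dimensions.
    """
    total = differing = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            total += 1
            if a != b:
                differing += 1  # count pixels where the masks disagree
    return differing / total < max_diff_fraction
```

The comparison is thus driven directly by a count of pixels, which is the sense in which the number of pixels serves as the compared image characteristic.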
Finally, in reviewing the present application, there does not seem to be objective evidence that the claim limitations are particularly directed to: addressing a particular problem which was recognized but unsolved in the art, producing unexpected results at the level of the ordinary skill in the art, or any other objective indicators of non-obviousness.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIKHAIL ITSKOVICH whose telephone number is (571)270-7940. The examiner can normally be reached Mon. - Thu. 9am - 8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris can be reached at (571)272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MIKHAIL ITSKOVICH/Primary Examiner, Art Unit 2483