Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Sweden on 28 September 2022. It is noted, however, that applicant has not filed a certified copy of the Swedish application as required by 37 CFR 1.55. Examiner acknowledges that Applicant provided an access code to the priority document; however, the documents were unrecoverable and, pursuant to 37 CFR 1.55, Applicant bears the ultimate responsibility for ensuring that a copy of the foreign application is provided to the Office. Nevertheless, Examiner has considered the priority date of the Swedish application as the relevant effective filing date of the instant application for search purposes.
Response to Amendment
In response to the Amendment to independent claims 1, 11, and 20, the rejections of claims 1, 2, and 8-10 under 35 U.S.C. § 102(a)(1) and claims 3-7 and 10-20 under 35 U.S.C. § 103 are withdrawn. However, upon further consideration, a new ground of rejection is made of claim 1 over Berkovich in view of Liu, and a new ground of rejection is made of claims 11 and 20 over Berkovich in view of Liu and in further view of Kuldkepp.
The rejection of claim 20 under 35 U.S.C. 112(b) is maintained.
The objection to claim 7 is withdrawn.
Response to Arguments
Applicant’s arguments filed 04 February 2026 with respect to the application of prior art within the rejection of claims 1-20 under 35 U.S.C. § 103 have been fully considered, but they are not persuasive. Applicant’s arguments, as best understood, are summarized below:
Examiner’s use of Riguer is improper because Riguer is non-analogous art under MPEP § 2141.01, addressing a different problem in a completely different technical field (virtual/augmented reality head-mounted displays (HMDs)), rendering the combination invalid as it would be illogical and would suffer from latency.
The combination of Berkovich and Kuldkepp, despite providing a potential ROI selection advantage, would abandon the core sequential approaches shared by the references.
Respectfully, Examiner disagrees.
MPEP § 2141.01(a), with respect to the application of analogous and non-analogous art, states the following:
In order for a reference to be proper for use in an obviousness rejection under 35 U.S.C. 103, the reference must be analogous art to the claimed invention. A reference is analogous art to the claimed invention if:
the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or
the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention).
Note that "same field of endeavor" and "reasonably pertinent" are two separate tests for establishing analogous art; it is not necessary for a reference to fulfill both tests in order to qualify as analogous art.
Thus, the disclosure of Riguer, to be analogous art, must satisfy one of these two tests. The following is Examiner’s logic with respect to both tests:
Riguer, like Berkovich and Kuldkepp (and newly applied reference Liu, applied in response to the amendment to the independent claims), is directed to imaging and video within head-mounted displays, specifically including body part detection. Although Riguer addresses a different problem (as Applicant indicates, its disclosure is directed to reduced buffering for blending images from an HMD image stream), all four applied prior art references explicitly disclose eye tracking. Riguer in particular is directed to eye saccade detection and prediction, with regions of varying resolution defined in ranges based on the foveal region. This is, in its most basic form, an image with a specific high-resolution region (for the region of highest visual acuity) within a lower-resolution background covering regions of lower visual acuity. Riguer further discloses in para. 0044 that the location of highest acuity (essentially a region of interest demanding the highest resolution) need not be in the center of the image. As a result, despite not being directed exclusively to body part detection, the method of Riguer discloses body part detection and multi-resolution image segments of varying sizes, with the smaller, higher-resolution segments being smaller than the full sensor size, along with intermediate and additional resolutions of the image; the relevance of these teachings would have been apparent to one having ordinary skill in the art prior to the effective filing date of the claimed invention.
Even if Examiner were to acquiesce to Applicant’s assertion that Riguer’s disclosure is from an external technical field, this would not preclude the application of the Riguer reference, nor would any resulting combination be illogical. Paragraphs 6-8 of the instant application’s Specification detail the desire for low-latency body part detection that optimizes both image resolution and frame rate. Latency minimization based on resolution and frame-rate optimization is also the object of the method and system of Riguer. Riguer explicitly discloses (para. 0001, “In order to create an immersive environment for the user, VR and AR video streaming applications typically require high resolution and high frame-rates, which equates to high data-rates. In the case of VR and AR displays, it is wasteful to transmit the full uniform resolution image as is commonly done today”) that prioritizing high-resolution, high-acuity image regions while decreasing resolution with increased distance from the foveal region is a clear, tested solution enabling efficient operations. Combining this with the disclosure of saccade tracking to determine the foveal region, one having ordinary skill in the art in search of a method of frame-rate and resolution optimization would be able to leverage Riguer’s disclosure regarding resolutions to optimize a tracking system by ensuring that key regions of an image sensor assembly maintain a high resolution while limiting other regions’ resolutions for computational efficiency.
Thus, the application of the Riguer reference, as well as all rejections made in view of Riguer, are maintained.
Furthermore, Applicant asserts that any combination of Berkovich and Kuldkepp “abandon[s] the core sequential approach shared by both references”. However:
The combination of Berkovich and Kuldkepp is a combination wherein the disclosure of Kuldkepp is used to inform the application environment, ensuring maximum resolution of the image sensor for imaging particular images, and disclosing further body parts for detection. The combination of Berkovich and Kuldkepp in no way integrates the entire method and system of Kuldkepp within that of Berkovich; rather, it uses the method and system of Kuldkepp to inform operations and applications of the method and system of Berkovich.
Applicant acquiesces that there is a rationale for combination by indicating the improvement to ROI selection. This indicates a logical, structured integration of the teachings of Kuldkepp into the method of Berkovich.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 20 recites the limitation "the instructions” in line 1. There is insufficient antecedent basis for this limitation in the claim. The limitation is being interpreted as simply “instructions”, and Examiner advises Applicant to remove the word “the”, which lacks antecedent basis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Berkovich et al. (US PG Pub 20200195875, hereinafter “Berkovich”) in view of Liu et al. (WIPO PG Pub 2021226411, hereinafter “Liu”).
Regarding claim 1, Berkovich discloses a dynamically programmable imaging system, used for object-of-interest tracking, the system comprising:
an image sensor (para. 0039, wherein the image sensor includes an array of pixel cells); and
a controller in communication with the image sensor (para. 0043, wherein the controller is a part of the image system and can determine the presence of object features while also controlling the image sensor); wherein the controller is configured to:
obtain from the image sensor first and second image segments acquired by the image sensor at the same time (paras. 0043-0046, wherein the first segment is the first subset of pixel cells programmed by the programming signals of the image controller enhanced to a higher resolution through increased quantization or bit lengths, and the second segment is the full-frame, non-subset, original resolution image),
wherein the first image segment is acquired by the image sensor at a first resolution and the second image segment is acquired by the image sensor at a second resolution (paras. 0043-0046, wherein the first segment is the first subset of pixel cells programmed by the programming signals of the image controller enhanced to a higher resolution through increased quantization or bit lengths, and the second segment is the full-frame, non-subset, original resolution image);
wherein the first image segment is smaller than the full sensor image size corresponding to the full field of view of the image sensor (para. 0044 and fig. 7B, wherein the region of interest is a subset of the overall image, and wherein the first image segment is acquired using only a subset of the image segment pixel array corresponding to the ROI, necessarily being smaller than the full sensor image size corresponding to the full field of view of the image sensor), and has a location and size corresponding to the image of an object-of-interest within the field of view in the image plane of the image sensor (para. 0044 and fig. 7B, wherein the region of interest is a subset of the overall image, and wherein the first image segment is acquired using only a subset of the image segment pixel array corresponding to the ROI); wherein the first resolution is higher than the second resolution (paras. 0043-0046, wherein the first subset of pixels, corresponding to the ROI, have their image quality/resolution increased through (but not limited to) increases in quantization and pixel bit lengths and the second subset of pixel cells correspond to the original image quality and full size of the image array).
Berkovich does not disclose wherein the first and second image segments are distinct spatial regions of a single image frame captured during a common exposure interval by the image sensor.
However, Liu discloses wherein the first and second image segments are distinct spatial regions of a single image frame captured during a common exposure interval by the image sensor (paras. 0004-0006 and figs. 2-3 for clear visualization). Specifically, Liu discloses an image sensor assembly containing a plurality of stacked sensor layers, allowing for image capture on a pixel array, subsampling through a combination of pixel regions, and convolutional operations. The image sensor enables capture at different resolutions for different regions of an image frame due to the presence of the stacked processing layers within the sensor itself, allowing for temporal data alignment and reducing computational timing and overhead. Therefore, the method of Liu serves as the logical “next-step” improvement for the image sensor disclosed by Berkovich. More specifically, the method and system of Berkovich, which relies on extra time for controller configuration to send control signals to and from the image sensor for resolution control, would be greatly improved in efficiency in both its image sensor (which would avoid the need for re-programming) and its eye tracking method (more high-resolution pixel focus on the eye region, with lower-resolution subsampled pixels in the surroundings). In this way, one having ordinary skill in the art prior to the effective filing date of the claimed invention would have recognized that the method and system of Berkovich would be improved by the disclosure of Liu and would have combined them as the use of a known technique to improve similar devices in the same way.
Regarding claim 2, Berkovich discloses all limitations of claim 1. Berkovich further discloses wherein the controller is configured to obtain each of the image segments by: sending a signal to the image sensor, the signal specifying a boundary of the respective image segment and receiving image data from the image sensor, the image data representing the image captured within the boundary of the respective image segment at the required resolution (paras. 0043-0045, wherein the controller, in response to the processor detecting an object/region of interest within a prior frame and noting its location, is configured to send a subset of programming signals to the image sensor, wherein the subset of programming signals is configured to communicate that the array of pixel cells corresponding to the ROI should process the region of interest differently from the remainder of the image, necessarily noting the boundaries of the image segments).
Regarding claim 8, Berkovich discloses all limitations of claim 1. Berkovich further discloses wherein the controller is configured to obtain a plurality of image frames in sequence, and determine a location and/or size of at least one of the image segments in a given image frame based on the respective image segment obtained in one or more preceding image frame (paras. 0043-0045, wherein the image apparatus comprising the image sensor and its controller and processor is configured to capture multiple frames in series and detect object pixel locations and features in current and subsequent frames based on detected positions and features in prior frames).
Regarding claim 9, Berkovich discloses all limitations of claim 8. Berkovich further discloses wherein the controller is configured to determine the location and/or size of the respective image segment by predicting the location and/or size of the image of the respective body part in a given image frame based on the image of the respective body part in one or more preceding image frame (paras. 0043-0045 and 0048, wherein the detected locations and features of the ROI are extracted and can be used, either by themselves or in conjunction with a motion model, to predict the position of the ROI/eye/body part within subsequent image frames).
Regarding claim 10, Berkovich discloses all limitations of claim 8. Berkovich further discloses wherein the controller is configured to set the size of the respective image segment in a given frame by adding a predetermined margin around the image of the respective body part in one or more preceding image frame (paras. 0063-0064 and 0071, the disclosure of the eye box, wherein the eye box is a detected region where the body part (in this embodiment, the eye) is present and acts as a predetermined margin, and wherein the size and orientation of the eye box can be preset).
Claims 3, 5, 11-12, 14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Berkovich in view of Liu and in further view of Kuldkepp et al. (US PG Pub 20160117555, hereafter referred to as Kuldkepp).
Independent claim 11 is rejected, mutatis mutandis, for reasons similar to claim 1. Berkovich in view of Liu does not further disclose the application of this dynamically programmable imaging system to animal body part tracking.
However, Kuldkepp discloses wherein an image sensor setup configured to capture a region of interest at a different resolution from a background is applied to animal body part tracking (Abstract, “based on the registered image frames, the data processing unit produces eye/gaze tracking data with respect to the subject”, wherein the registered image frames are the result of a segmentation of regions of interest in images with different resolutions). Specifically, Kuldkepp discloses a method and system of image registration of images with different resolutions for eye and gaze tracking. Therefore, both Berkovich in view of Liu and Kuldkepp disclose region-of-interest object detection and tracking methods using multiple image resolutions to conserve computational resources while including high-resolution segments for accurate tracking. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have applied the method and system of Berkovich in view of Liu within the environment of Kuldkepp, as the environment of Kuldkepp provides motivation in the prior art that would have led one having ordinary skill to apply the method and system of Berkovich in view of Liu to animal body part tracking.
Regarding claim 3, Berkovich in view of Liu discloses all limitations of claim 1. Berkovich in view of Liu does not disclose wherein the first resolution is the maximum resolution of the image sensor.
However, Kuldkepp discloses wherein, in an animal body part tracking workflow specifically focused on eye and gaze detection, the first resolution is the maximum resolution of the image sensor (para. 0004, “typically, in each image, the highest possible resolution that the sensor can provide is used”). Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have utilized the maximum resolution of the image sensor as the greater of the two resolutions according to the rationale of claim 11.
Regarding claim 5, Berkovich in view of Liu discloses all limitations of claim 1. Berkovich in view of Liu does not disclose wherein the second image segment has a location and size corresponding to the image of a second body part of the animal within the field of view in the image plane of the image sensor.
However, Kuldkepp discloses wherein the second image segment has a location and size corresponding to the image of a second body part of the animal within the field of view in the image plane of the image sensor (paras. 0039-0044 and fig. 17, wherein the second image segment is the lower-resolution, wider-FOV frame in a series of frames which contains a face, which is necessarily comprised of a plurality of body parts, any one of which may be the second body part). Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have identified the second body part’s size and location within the second image segment within the field of view of the image plane of the image sensor according to the rationale of claim 11.
Regarding claim 12, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11, respectively. Berkovich further discloses wherein the controller is configured to obtain each of the image segments by: sending a signal to the image sensor, the signal specifying a boundary of the respective image segment and receiving image data from the image sensor, the image data representing the image captured within the boundary of the respective image segment at the required resolution (paras. 0043-0045, wherein the controller, in response to the processor detecting an object/region of interest within a prior frame and noting its location, is configured to send a subset of programming signals to the image sensor, wherein the subset of programming signals is configured to communicate that the array of pixel cells corresponding to the ROI should process the region of interest differently from the remainder of the image, necessarily noting the boundaries of the image segments).
Regarding claim 14, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11. Kuldkepp further discloses wherein the second image segment has a location and size corresponding to the image of a second body part of the animal within the field of view in the image plane of the image sensor (paras. 0039-0044 and fig. 17, wherein the second image segment is the lower-resolution, wider-FOV frame in a series of frames which contains a face, which is necessarily comprised of a plurality of body parts, any one of which may be the second body part). Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have identified the second body part’s size and location within the second image segment within the field of view of the image plane of the image sensor according to the rationale of claim 11.
Regarding claim 17, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11. Berkovich further discloses wherein the controller is configured to obtain a plurality of image frames in sequence, and determine a location and/or size of at least one of the image segments in a given image frame based on the respective image segment obtained in one or more preceding image frame (paras. 0043-0045, wherein the image apparatus comprising the image sensor and its controller and processor is configured to capture multiple frames in series and detect object pixel locations and features in current and subsequent frames based on detected positions and features in prior frames).
Regarding claim 18, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 17. Berkovich further discloses wherein the controller is configured to determine the location and/or size of the respective image segment by predicting the location and/or size of the image of the respective body part in a given image frame based on the image of the respective body part in one or more preceding image frame (paras. 0043-0045 and 0048, wherein the detected locations and features of the ROI are extracted and can be used, either by themselves or in conjunction with a motion model, to predict the position of the ROI/eye/body part within subsequent image frames).
Regarding claim 19, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 17. Berkovich further discloses wherein the controller is configured to set the size of the respective image segment in a given frame by adding a predetermined margin around the image of the respective body part in one or more preceding image frame (paras. 0063-0064 and 0071, the disclosure of the eye box, wherein the eye box is a detected region where the body part (in this embodiment, the eye) is present and acts as a predetermined margin, and wherein the size and orientation of the eye box can be preset).
Regarding claim 20, as best understood, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11. Berkovich further discloses a non-transitory computer-readable medium storing the instructions which, when executed by a processor, causes the processor to perform the method of claim 11 (paras. 0136-0137).
Claims 4 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Berkovich in view of Liu and in further view of Riguer et al. (US PG Pub 20210089119, hereafter referred to as Riguer).
Regarding claim 4, Berkovich in view of Liu discloses all limitations of claim 1. Berkovich in view of Liu does not explicitly disclose wherein the second image segment is smaller than the full sensor image size.
However, Riguer discloses wherein the second image segment is smaller than the full sensor image size (para. 0029, wherein the collection of four images at four scales and four resolutions is disclosed using the same image sensor, wherein the second image segment of the instant application is analogous to the third image of third-highest resolution outside of the first two image segments of highest resolution, and the second image segment would necessarily be smaller than the full sensor image size as the sensor is further configured to capture a larger, lower-resolution background image beyond the third image segment). Specifically, Riguer discloses a method and system for blending resolutions of multiple visual information streams together for gaze detection within a virtual-reality head-mounted display. Therefore, both Berkovich in view of Liu and Riguer disclose image-sensor-mediated methods of observation across resolutions for body part detection and segmentation out of other images. Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the disclosure of Riguer with respect to the second image segment size into the method and system of Berkovich in view of Liu as a teaching in the prior art which would have led one having ordinary skill to modify the method of Berkovich in view of Liu; specifically, Riguer's disclosure of multiple different resolutions for increasingly granular identification of ocular movements would have led one having ordinary skill in the art to have modified the method and system of Berkovich in view of Liu to account for further resolution decreases in unimportant image areas, reducing processing time and resource usage by using less than the full sensor image size for the second resolution.
Regarding claim 6, Berkovich in view of Liu discloses all limitations of claim 1. Berkovich in view of Liu does not disclose wherein the controller is configured to obtain a further image being an image of the full field of view of the image sensor at a resolution lower than the second resolution.
However, Riguer discloses wherein the controller is configured to obtain a further image being an image of the full field of view of the image sensor at a resolution lower than the second resolution (para. 0029, wherein the background image collected at the lowest resolution of all four images is the further image collected using the full field of view (FOV) of the image sensor). Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the disclosure of Riguer with respect to the background, full FOV image to the method and system of Berkovich in view of Liu as modified according to the rationale of claim 4.
Regarding claim 7, Berkovich in view of Liu discloses all limitations of claim 1. Berkovich in view of Liu does not disclose wherein the controller is configured to obtain one or more further image segment at one or more further resolution intermediate the first and second resolutions. However, Riguer discloses wherein the controller is configured to obtain one or more further image segment at one or more further resolution intermediate the first and second resolutions (para. 0029, wherein the collection of four images at four scales and four resolutions is disclosed using the same image sensor, wherein the intermediate image segment of the instant application is analogous to the second image of second-highest resolution outside of the first image segment of highest resolution). Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the disclosure of Riguer with respect to the collection of an intermediate resolution image within the method and system of Berkovich in view of Liu as modified according to the rationale of claim 4.
Claims 13 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Berkovich in view of Liu and Kuldkepp and in further view of Riguer.
Regarding claim 13, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11. Berkovich in view of Liu and in further view of Kuldkepp does not explicitly disclose wherein the second image segment is smaller than the full sensor image size.
However, Riguer discloses wherein the second image segment is smaller than the full sensor image size (para. 0029, wherein the collection of four images at four scales and four resolutions is disclosed using the same image sensor, wherein the second image segment of the instant application is analogous to the third image of third-highest resolution outside of the first two image segments of highest resolution, and the second image segment would necessarily be smaller than the full sensor image size as the sensor is further configured to capture a larger, lower-resolution background image beyond the third image segment). Specifically, Riguer discloses a method and system for blending resolutions of multiple visual information streams together for gaze detection within a virtual-reality head-mounted display. Therefore, Berkovich, Liu, Kuldkepp, and Riguer each disclose image-sensor-mediated methods of observation across resolutions for body part detection and segmentation out of other images. Thus, it would have been obvious for one having ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the disclosure of Riguer with respect to the second image segment size into the method and system of Berkovich in view of Liu and in further view of Kuldkepp as a teaching in the prior art which would have led one having ordinary skill to modify that method; specifically, Riguer's disclosure of multiple different resolutions for increasingly granular identification of ocular movements would have led one having ordinary skill in the art to have modified the method and system of Berkovich in view of Liu and in further view of Kuldkepp to account for further resolution decreases in unimportant image areas, reducing processing time and resource usage by using less than the full sensor image size for the second resolution.
Regarding claim 15, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11. Berkovich in view of Liu and in further view of Kuldkepp does not disclose wherein the controller is configured to obtain a further image being an image of the full field of view of the image sensor at a resolution lower than the second resolution.
However, Riguer discloses wherein the controller is configured to obtain a further image being an image of the full field of view of the image sensor at a resolution lower than the second resolution (para. 0029, wherein the background image, collected at the lowest resolution of all four images, is the further image collected using the full field of view (FOV) of the image sensor). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the disclosure of Riguer with respect to the background, full-FOV image into the method and system of Berkovich as modified by Liu and Kuldkepp according to the rationale of claim 13.
Regarding claim 16, Berkovich in view of Liu and in further view of Kuldkepp discloses all limitations of claim 11. Berkovich in view of Liu and in further view of Kuldkepp does not disclose wherein the controller is configured to obtain one or more further image segment at one or more further resolution intermediate the first and second resolutions. However, Riguer discloses wherein the controller is configured to obtain one or more further image segment at one or more further resolution intermediate the first and second resolutions (para. 0029, disclosing the collection of four images at four scales and four resolutions using the same image sensor; the intermediate image segment of the instant application is analogous to the second image, of second-highest resolution, outside the first image segment of highest resolution). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the disclosure of Riguer with respect to the collection of an intermediate-resolution image into the method and system of Berkovich as modified by Liu and Kuldkepp according to the rationale of claim 13.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROHAN TEJAS MUKUNDHAN whose telephone number is (571)272-2368. The examiner can normally be reached Monday - Friday 9AM - 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROHAN TEJAS MUKUNDHAN/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698