Prosecution Insights
Last updated: April 19, 2026
Application No. 17/746,648

METHODS AND SYSTEMS FOR IMPROVING VIDEO ANALYTIC RESULTS

Final Rejection under §103

Filed: May 17, 2022
Examiner: CHANG, DANIEL
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Honeywell International Inc.
OA Round: 4 (Final)

Grant Probability: 64% (Moderate)
OA Rounds: 5-6
To Grant: 2y 10m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 233 granted / 367 resolved; +5.5% vs TC avg)
Interview Lift: +13.0% (moderate lift; resolved cases with interview vs. without)
Typical Timeline: 2y 10m avg prosecution; 45 applications currently pending
Career History: 412 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)

Tech Center averages are estimates; based on career data from 367 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to the remarks entered on 10/01/2025. Claims 1-2, 5-8, 21-26 & 28 are pending in the instant application. Claims 3-4, 9-20 & 27 are cancelled.

Response to Arguments

Applicant's remarks filed 10/01/2025, pages 7-11, regarding the rejection of claim 1, and similarly claims 23 & 25, under 35 USC 103 have been fully considered, but they are not persuasive. Applicant first asserts that Wuergler is not analogous art, specifically alleging that "Wuergler is related to vehicle hitching" and is not concerned with maintaining the analytic accuracy of a video stream. The Examiner respectfully disagrees and deems this point moot because it is Lee, not Wuergler, that is relied upon to disclose the video analytics algorithm. Lee, in the background and summary of the invention, recites multiple times classification algorithms, image enhancement algorithms, image tone adjustment algorithms, and target detection algorithms in relation to its license plate detection method and apparatus. Wuergler is relied upon merely to teach known video and image techniques that further enhance the license plate detection method and apparatus of Lee. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
The Applicant next asserts that Wuergler does not teach or suggest an adjustment to the "desired minimum frame per second (FPS) parameter for the video stream, the desired minimum frame resolution parameter for the video stream, and the desired minimum bit rate parameter for the video stream." The Examiner respectfully disagrees because it is the combination of Burch and Wuergler that teaches or suggests the above limitations. Burch, in Paragraphs [0034], [0041]-[0043], [0054], teaches storing selection parameters, wherein the selection parameters include width, height, and scaling factor, as desired minimum frame resolution parameters, since video frame resolutions are represented by size, namely width and height, and thus constitute a resolution. However, Burch does not teach adjustment of the above resolution parameters, and the rejection thus introduces Wuergler to teach or suggest adjusting the resolution of Burch to a desired minimum frame resolution. In Paragraphs [0021]-[0023] & [0036]-[0039], Wuergler teaches automatic adjustment of the resolution of the dynamic pixel images, wherein the controller 50 may control an operation of the camera 20 using active camera control signals (arrow 35), e.g., to command optical zoom of the camera 20, as video camera settings. In other embodiments, the controller 50 may then process the collected pixel images (arrow 25) in software by cropping the collected pixel images. Cropping an image reduces its resolution because it removes pixels, cutting down the total pixel count and detail.
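The cropping point can be illustrated with a short sketch. This is a generic illustration of why cropping lowers resolution, not code from Wuergler; the frame size and crop window below are arbitrary assumptions:

```python
# Illustrative sketch: cropping a frame discards pixels outside the crop
# window, so the cropped frame's total pixel count (its resolution) is
# strictly smaller than the original's.
def crop(frame, left, top, width, height):
    """Return the sub-grid of `frame`, a list of rows of pixel values."""
    return [row[left:left + width] for row in frame[top:top + height]]

full = [[0] * 1920 for _ in range(1080)]          # a 1920x1080 frame
region = crop(full, left=600, top=300, width=640, height=480)

full_pixels = sum(len(row) for row in full)       # 1920 * 1080 = 2,073,600
region_pixels = sum(len(row) for row in region)   # 640 * 480 = 307,200
assert region_pixels < full_pixels
```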
Therefore, the combination of Burch and Wuergler teaches or suggests adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, including one or more of the desired minimum frame per second (FPS) parameter for the video stream, the desired minimum frame resolution parameter for the video stream, and the desired minimum bit rate parameter for the video stream.

The Applicant then asserts that one of ordinary skill in the art would not be motivated to combine Wuergler with Lee and Burch to arrive at the invention of claim 1. The Examiner respectfully disagrees. Applicant's attention is directed to MPEP § 2141.01(a)(I), which states: The Examiner must determine what is "analogous prior art" for the purpose of analyzing the obviousness of the subject matter at issue. "In order to rely on a reference as a basis for rejection of an Applicant's invention, the reference must either be in the field of Applicant's endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned." In re Oetiker, 977 F.2d 1443, 1446, 24 USPQ2d 1443, 1445 (Fed. Cir. 1992). See also In re Deminski, 796 F.2d 436, 230 USPQ 313 (Fed. Cir. 1986); In re Clay, 966 F.2d 656, 659, 23 USPQ2d 1058, 1060-61 (Fed. Cir. 1992) ("A reference is reasonably pertinent if, even though it may be in a different field from that of the inventor's endeavor, it is one which, because of the matter with which it deals, logically would have commended itself to an inventor's attention in considering his problem."); Wang Laboratories Inc. v. Toshiba Corp., 993 F.2d 858, 26 USPQ2d 1767 (Fed. Cir. 1993); and State Contracting & Eng'g Corp. v. Condotte America, Inc., 346 F.3d 1057, 1069, 68 USPQ2d 1481, 1490 (Fed. Cir. 2003) (where the general scope of a reference is outside the pertinent field of endeavor, the reference may be considered analogous art if subject matter disclosed therein is relevant to the particular problem with which the inventor is involved).

In the instant case, one of ordinary skill in the art would have searched and considered what was already known in the state of the art in the field of video and imaging, and pertinent to the problem of adjusting camera and video parameters as in Lee, Burch, and Wuergler, before the effective filing date of the invention. Finally, the strongest rationale for combining references is a recognition, expressly or impliedly in the prior art or drawn from a convincing line of reasoning based on established scientific principles or legal precedent, that some advantage or expected beneficial result would have been produced by their combination. In re Sernaker, 702 F.2d 989, 994-95, 217 USPQ 1, 5-6 (Fed. Cir. 1983). See also Dystar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick, 464 F.3d 1356, 1368, 80 USPQ2d 1641, 1651 (Fed. Cir. 2006) ("Indeed, we have repeatedly held that an implicit motivation to combine exists not only when a suggestion may be gleaned from the prior art as a whole, but when the 'improvement' is technology-independent and the combination of references results in a product or process that is more desirable, for example because it is stronger, cheaper, cleaner, faster, lighter, smaller, more durable, or more efficient. Because the desire to enhance commercial opportunities by improving a product or process is universal—and even common-sensical—we have held that there exists in these situations a motivation to combine prior art references even absent any hint of suggestion in the references themselves.").
As discussed in the rejection previously and below, it would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution adjustment features in Wuergler as above, to enhance the dynamic pixel images through camera operations including panning, tilting, and zooming to improve upon possible limitations, as Wuergler states in Paragraphs [0004] & [0022].

Lastly, the Applicant asserts that Wuergler does not teach or suggest the claim limitations of the video camera having one or more video camera settings that control one or more video parameters of the video stream that is provided by the video camera, and adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, including one or more of the desired minimum frame per second (FPS) parameter for the video stream, the desired minimum frame resolution parameter for the video stream, and the desired minimum bit rate parameter for the video stream. The Examiner respectfully disagrees. As stated above, it is the combination of Burch and Wuergler that teaches or suggests the above limitations. Burch, in Paragraphs [0034], [0041]-[0043], [0054], teaches storing selection parameters, wherein the selection parameters include width, height, and scaling factor, as desired minimum frame resolution parameters, since video frame resolutions are represented by size, namely width and height, and thus constitute a resolution. However, Burch does not teach adjustment of the above resolution parameters, and the rejection thus introduces Wuergler to teach or suggest adjusting the resolution of Burch to a desired minimum frame resolution.
In Paragraphs [0021]-[0023] & [0036]-[0039], Wuergler teaches automatic adjustment of the resolution of the dynamic pixel images, wherein the controller 50 may control an operation of the camera 20 using active camera control signals (arrow 35), e.g., to command optical zoom of the camera 20, reading as video camera settings. In other embodiments, the controller 50 may then process the collected pixel images (arrow 25) in software by cropping the collected pixel images. Cropping an image reduces its resolution because it removes pixels, cutting down the total pixel count and detail. Therefore, the combination of Burch and Wuergler teaches or suggests the video camera having one or more video camera settings that control one or more video parameters of the video stream that is provided by the video camera, and adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, including one or more of the desired minimum frame per second (FPS) parameter for the video stream, the desired minimum frame resolution parameter for the video stream, and the desired minimum bit rate parameter for the video stream. Therefore, the rejection of claims 1, 23 & 25 is maintained under 35 USC 103.

Applicant's remarks filed 10/01/2025, page 11, with respect to the rejection of claims 2, 5-8, 21-22, 24, 26 & 28 under 35 USC 103 have been fully considered, but they are not persuasive. Applicant first relies on the patentability of the claims from which these claims depend to traverse the rejection, without prejudice to any further basis for patentability of these claims based on the additional elements recited. The Examiner cannot concur with the Applicant because the combination of Lee, Burch, and Wuergler teaches independent claims 1, 23 & 25 as outlined below.
Thus, claims 2, 5-8, 21-22, 24, 26 & 28 are also rejected for similar reasons as outlined below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 22-26 & 28 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (CN 111797694 A) (hereinafter Lee) in view of Burch et al. (US 2019/0073747 A1) (hereinafter Burch), and further in view of Wuergler et al. (US 2017/0151846 A1) (hereinafter Wuergler).

Regarding claim 1, Lee discloses a method of improving performance of a video analytics algorithm, the video analytics algorithm configured to receive and analyze a video stream provided by a video camera [Pg. 7, Sixth Paragraph, the present invention improves overall contrast and hue saturation, thereby improving the accuracy of license plate detection within images, as video analytics], the method comprising: identifying one or more of the video parameters of the video stream [Pg.
8-9, Seventh Paragraphs, determining the brightness level of the image, as a video parameter]; comparing one or more of the video parameters of the video stream with a corresponding one of the desired video parameters of the set of desired video parameters to ascertain whether one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the set of desired video parameters by at least a threshold amount [Pg. 8-9, Seventh Paragraphs, determining the brightness level of the image, comparing it to first, second, and third brightness levels, as desired video parameters, and determining how many grayscale pixels (0-50, 50-128, or 128-255) are in the grayscale range, as threshold amounts, belonging to the first, second, and third brightness levels, respectively]; and when one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the set of desired video parameters by at least the threshold amount, adjusting one or more of the video parameters of the video stream toward the corresponding one of the desired video parameters of the set of desired video parameters to increase the accuracy level of the video analytics algorithm [Pg. 10, tenth to eleventh paragraphs, specifically, according to the relationship between the average brightness threshold and the average brightness value Lw, it is determined whether the brightness of the current image to be detected is low, and when it is determined that the brightness of the image to be detected is low, a gamma value that increases the brightness of the image to be detected is set].
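The grayscale-range and gamma steps attributed to Lee can be sketched as follows. This is one illustrative reading of the cited passages; the half-open range boundaries, the average-brightness threshold of 100, and the gamma value of 0.6 are assumptions, not values from the reference:

```python
def brightness_level(gray):
    """Classify an image as low/medium/high brightness by which grayscale
    range (0-50, 50-128, 128-255) holds the most pixels (illustrative)."""
    counts = {
        "low": sum(1 for p in gray if p < 50),
        "medium": sum(1 for p in gray if 50 <= p < 128),
        "high": sum(1 for p in gray if p >= 128),
    }
    return max(counts, key=counts.get)

def brighten_if_dark(gray, avg_threshold=100.0, gamma=0.6):
    """When the average brightness Lw is below the threshold, apply a
    gamma < 1 tone adjustment to raise brightness; otherwise pass through."""
    lw = sum(gray) / len(gray)
    if lw >= avg_threshold:
        return list(gray)
    return [round(255 * (p / 255) ** gamma) for p in gray]

dark = [40] * 90 + [200] * 10            # a mostly dark image
assert brightness_level(dark) == "low"
adjusted = brighten_if_dark(dark)
assert sum(adjusted) / len(adjusted) > sum(dark) / len(dark)
```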
However, Lee does not explicitly disclose the method comprising storing a set of desired video parameters for achieving a desired accuracy level for the video analytics algorithm, wherein the set of desired video parameters include one or more of a desired minimum frame per second (FPS) parameter for the video stream, a desired minimum frame resolution parameter for the video stream, and a desired minimum bit rate parameter for the video stream. Burch teaches the method comprising storing a set of desired video parameters for achieving a desired accuracy level for the video analytics algorithm, wherein the set of desired video parameters include one or more of a desired minimum frame per second (FPS) parameter for the video stream, a desired minimum frame resolution parameter for the video stream, and a desired minimum bit rate parameter for the video stream [Paragraphs [0034], [0041]-[0043], [0054], data store 420 can store selection parameters, wherein the selection parameters include width, height, and scaling factor, as a desired minimum frame resolution parameter]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution selection parameters in Burch as above, for scaling, in order to improve the quality of video frames displayed on a display device as video frames are rendered and displayed in an output resolution that is greater than an original output resolution in which the video frames were intended to be rendered and displayed (Burch, Paragraphs [0002], [0017] & [0025]).
Lastly, Lee and Burch do not explicitly disclose the video camera having one or more video camera settings that control one or more video parameters of the video stream that is provided by the video camera and adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, including one or more of the desired minimum frame per second (FPS) parameter for the video stream, the desired minimum frame resolution parameter for the video stream, and the desired minimum bit rate parameter for the video stream. Wuergler teaches the video camera having one or more video camera settings that control one or more video parameters of the video stream that is provided by the video camera and adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, including one or more of the desired minimum frame per second (FPS) parameter for the video stream, the desired minimum frame resolution parameter for the video stream, and the desired minimum bit rate parameter for the video stream [Paragraphs [0021]-[0023] & [0036]-[0039], automatic adjustment of the resolution of the dynamic pixel images, as video parameter, the controller 50 may control an operation of the camera 20 using active camera control signals (arrow 35), e.g., to command panning, tilting, or optical zoom of the camera 20 as video camera settings. In other embodiments, the controller 50 may process the collected pixel images (arrow 25) in software by cropping, zooming, or enhancing the collected pixel images]. 
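The claimed compare-and-adjust behavior can be sketched as a minimal loop over stored desired minimums. The parameter names, values, and the 10% divergence threshold below are illustrative assumptions, not taken from Lee, Burch, or Wuergler:

```python
# Hedged sketch of the claimed loop: compare the stream's current
# parameters against stored desired minimums and report those that fall
# short by at least a threshold, so that camera settings can then be
# adjusted toward the desired values.
DESIRED_MIN = {"fps": 30, "height": 720, "bitrate_kbps": 4000}

def divergent(stream, desired=DESIRED_MIN, threshold=0.10):
    """Map each shortfall parameter to (current, desired minimum)."""
    flagged = {}
    for name, minimum in desired.items():
        current = stream[name]
        if minimum - current >= threshold * minimum:
            flagged[name] = (current, minimum)
    return flagged

stream = {"fps": 24, "height": 720, "bitrate_kbps": 2500}
# fps is 20% and bit rate 37.5% below their minimums; height meets its minimum
assert set(divergent(stream)) == {"fps", "bitrate_kbps"}
```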
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution adjustment features in Wuergler as above, to enhance the dynamic pixel images through camera operations including panning, tilting, and zooming to improve upon possible limitations (Wuergler, Paragraphs [0004] & [0022]).

Regarding claim 2, Lee, Burch, and Wuergler disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Wuergler teaches wherein the set of desired video parameters comprises a desired video camera setting parameter, wherein the desired video camera setting parameter comprises one or more of a camera focus parameter, a camera zoom parameter, a camera tilt parameter and a camera pan parameter [Paragraphs [0021]-[0023] & [0036]-[0039], automatic adjustment of the resolution of the dynamic pixel images, as a video parameter, wherein the controller 50 may control an operation of the camera 20 using active camera control signals (arrow 35), as the camera zoom parameter, camera tilt parameter and camera pan parameter, e.g., to command panning, tilting, or optical zoom of the camera 20 as video camera settings]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution adjustment features in Wuergler as above, to enhance the dynamic pixel images through camera operations including panning, tilting, and zooming to improve upon possible limitations (Wuergler, Paragraphs [0004] & [0022]).

Regarding claim 5, Lee, Burch, and Wuergler disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim.
Furthermore, Lee discloses wherein the video analytics algorithm comprises one of a facial recognition algorithm, a mask detection algorithm, a person count detection algorithm, a vehicle detection algorithm, an unattended bag detection algorithm, a shoplifting detection algorithm, a crowd detection algorithm, a person fall detection algorithm, and a jaywalking detection algorithm [Pg. 1-3, summary, target detection algorithm to determine an image of the vehicle area to be detected, as a vehicle detection algorithm, in order to further perform license plate detection].

Regarding claim 22, Lee, Burch, and Wuergler disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim. Furthermore, Lee discloses wherein the set of desired video parameters comprises a desired scene lighting parameter for producing a desired lighting condition relative to a subject of interest in the Field of View (FOV) of the video camera for achieving the desired accuracy level for the video analytics algorithm [Pg. 8-9, Seventh Paragraphs, determining the brightness level of the image, comparing it to first, second, and third brightness levels, as two or more desired scene lighting parameters; Pg. 11, fig. 3, the FOV shows the subject of interest is a vehicle including a license plate].

Regarding claims 23-24, claims 23-24 are each drawn to methods of improving performance of a video analytics algorithm having limitations similar to the method claimed in claim 2 treated in the above rejection. Therefore, method claims 23-24 correspond to method claims (2 & 2) and are rejected for the same reasons of obviousness as used above.

Regarding claims 25-26 & 28, system claims 25-26 & 28 are drawn to the system using/performing the same method as claimed in claims 1-2 & 5. Therefore, system claims 25-26 & 28 correspond to method claims 1-2 & 5 and are rejected for the same reasons of obviousness as used above.
Furthermore, Lee discloses a video camera for providing a video stream, an input for receiving a video stream captured by a video camera, a memory for storing the video analytics algorithm, and a controller operatively coupled to the input and the memory [Pgs. 8-9, Seventh-Ninth Paragraphs, imaging analysis device 102/license plate detection apparatus 500 as the input for receiving the video stream captured by image acquisition device 101 as the video camera; Pg. 13, eighth paragraph, memory 502/503 for storing software instruction operations on the information processing device that includes the target detection algorithm as the video analytics algorithm, and CPU 501 within license plate detection apparatus 500 connected to memory 502 and image acquisition device 102 via apparatus 500].

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (CN 111797694 A) (hereinafter Lee), Burch et al. (US 2019/0073747 A1) (hereinafter Burch), and Wuergler et al. (US 2017/0151846 A1) (hereinafter Wuergler) in view of Kamiya (US 2014/0321759 A1) (hereinafter Kamiya).

Regarding claim 6, Lee, Burch, and Wuergler disclose the method of claim 5, and are analyzed as previously discussed with respect to the claim.
However, Lee, Burch, and Wuergler do not explicitly disclose storing for each of a plurality of video analytics algorithms a corresponding set of desired video parameters for achieving a desired accuracy level for the respective video analytics algorithm; for each of a plurality of video analytics algorithms, comparing one or more of the video parameters of the video stream with the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms to ascertain whether one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms; and when one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms, adjusting one or more of the video parameters of the video stream toward the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms. 
Kamiya teaches storing, for each of a plurality of video analytics algorithms, a corresponding set of desired video parameters for achieving a desired accuracy level for the respective video analytics algorithm [Paragraphs [0028]-[0031], the storage section 11 stores programs regarding the recognition dictionaries describing reference data of detection objects and a method for performing image recognition using the recognition dictionaries, wherein the size, brightness, contrast and color of the same detection object, as the corresponding set of desired video parameters for achieving a desired accuracy level, are described differently for the same model of each detection object in these recognition dictionaries]; for each of a plurality of video analytics algorithms, comparing one or more of the video parameters of the video stream with the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms to ascertain whether one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms; and when one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms, adjusting one or more of the video parameters of the video stream toward the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms [Paragraphs [0028]-[0039], a plurality of different recognition dictionaries and an image recognition method.
In step S106, the arithmetic processing section 10 selects one from the recognition dictionaries and one from the image recognition algorithms based on the distance specified in step S102 and the state of the light analyzed in step S104. More specifically, the arithmetic processing section 10 specifies the environmental conditions for the object image area based on the specified distance to the target and the specified state of the light including the brightness, contrast and color of the object image area, and thereafter, selects one from the recognition dictionaries and one from the image recognition algorithms which match the specified environmental conditions. In step S108, the arithmetic processing section 10 performs the image recognition process on the object image area using the recognition dictionary and the image recognition algorithm selected in step S106 to detect a detection object within the object image area. More specifically, the arithmetic processing section 10 scans the object image area using the selected recognition dictionary and performs the image recognition on the object image area using the selected image recognition algorithm. When the selected image recognition algorithm includes the image correcting function, the arithmetic processing section 10 corrects the object image area before performing the image recognition]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the multiple image recognition algorithms in Kamiya as above, to increase detection of objects in varying environments with time (Kamiya, Paragraphs [0006]-[0009]). 
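The selection step described for Kamiya can be sketched as a lookup over stored selection patterns keyed on the specified distance and state of the light. The distance and brightness cut-offs and the dictionary/algorithm names below are hypothetical, chosen only to illustrate the pattern-selection idea:

```python
# Hedged sketch of a Kamiya-style step S106: select one recognition
# dictionary and one image recognition algorithm from stored selection
# patterns matching the specified environmental conditions.
PATTERNS = {
    ("near", "bright"): ("dict_near_day", "algo_edge_match"),
    ("near", "dark"):   ("dict_near_night", "algo_contrast_correct"),
    ("far", "bright"):  ("dict_far_day", "algo_multiscale"),
    ("far", "dark"):    ("dict_far_night", "algo_contrast_correct"),
}

def select_pattern(distance_m, avg_brightness):
    """Return the (dictionary, algorithm) pair matching the conditions."""
    dist = "near" if distance_m < 20 else "far"
    light = "bright" if avg_brightness >= 100 else "dark"
    return PATTERNS[(dist, light)]

assert select_pattern(8, 140) == ("dict_near_day", "algo_edge_match")
```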
However, Burch and Kamiya do not explicitly disclose ascertaining whether one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms by at least a corresponding threshold amount; and, when one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms by at least the corresponding threshold amount, adjusting one or more of the video parameters of the video stream toward the corresponding one of the desired video parameters. Lee discloses ascertaining whether one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms by at least a corresponding threshold amount [Pg. 8-9, Seventh Paragraphs, determining the brightness level of the image, comparing it to first, second, and third brightness levels, as desired video parameters, and determining how many grayscale pixels (0-50, 50-128, or 128-255) are in the grayscale range, as threshold amounts, belonging to the first, second, and third brightness levels, respectively]; and, when one or more of the video parameters of the video stream diverge from the corresponding one of the desired video parameters of the respective set of desired video parameters for the respective one of the plurality of video analytics algorithms by at least the corresponding threshold amount, adjusting one or more of the video parameters of the video stream toward the corresponding one of the desired video parameters [Pg.
10, tenth to eleventh paragraphs, specifically, according to the relationship between the average brightness threshold and the average brightness value Lw, it is determined whether the brightness of the current image to be detected is low, and when it is determined that the brightness of the image to be detected is low, a gamma value that increases the brightness of the image to be detected is set]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee, Burch, and Kamiya to incorporate and implement the thresholding algorithms in Lee as above, to determine the brightness level to which the image to be detected belongs, wherein different brightness levels correspond to different gray scale ranges, and when it is determined that the brightness level to which the image to be detected belongs is the preset brightness level, the image to be detected is adjusted by using an image tone adjustment algorithm to obtain the adjusted image to be detected (Lee, Pg. 1, Background technique). Lastly, Lee, Burch, and Kamiya do not explicitly disclose adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream provided by the video camera. Wuergler teaches adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream provided by the video camera [Paragraphs [0021]-[0023] & [0036]-[0039], automatic adjustment of the resolution of the dynamic pixel images, as a video parameter, wherein the controller 50 may control an operation of the camera 20 using active camera control signals (arrow 35), e.g., to command panning, tilting, or optical zoom of the camera 20 as video camera settings.
In other embodiments, the controller 50 may process the collected pixel images (arrow 25) in software by cropping, zooming, or enhancing the collected pixel images]. It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution adjustment features in Wuergler as above, to enhance the dynamic pixel images through camera operations including panning, tilting, and zooming to improve upon possible limitations (Wuergler, Paragraph [0004] & [0022]). Regarding claim 7, Lee, Burch, Wuergler, and Kamiya disclose the method of claim 6, and are analyzed as previously discussed with respect to the claim. Furthermore, Lee discloses comprising: adjusting one or more of the video parameters of the video stream to satisfy the desired accuracy level for each of two or more of the plurality of video analytics algorithms [Pg. 10, tenth to eleventh paragraphs, Specifically, according to the relationship between the average brightness threshold and the average brightness value Lw, it is determined whether the brightness of the current image to be detected is low, and when it is determined that the brightness of the image to be detected is low, a gamma value that increases the brightness of the image to be detected is set]. Regarding claim 8, Lee, Burch, Wuergler, and Kamiya disclose the method of claim 6, and are analyzed as previously discussed with respect to the claim. 
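As an illustration only (not part of the record), the brightness-level thresholding attributed to Lee above can be sketched as follows. The grayscale ranges (0-50, 50-128, 128-255) come from the citation; the majority-vote classification rule and the gamma value of 0.5 are illustrative assumptions, not taken from Lee.

```python
import numpy as np

def classify_brightness(gray: np.ndarray) -> int:
    """Return 1, 2, or 3 (low, medium, high) based on which grayscale
    range (0-50, 50-128, 128-255) contains the most pixels."""
    counts = [
        np.count_nonzero(gray < 50),                    # dark pixels
        np.count_nonzero((gray >= 50) & (gray < 128)),  # mid pixels
        np.count_nonzero(gray >= 128),                  # bright pixels
    ]
    return int(np.argmax(counts)) + 1

def adjust_if_dark(gray: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Apply a brightening gamma correction (gamma < 1) only when the
    image is classified as low brightness; otherwise return it unchanged."""
    if classify_brightness(gray) == 1:
        normalized = gray.astype(np.float64) / 255.0
        return np.clip((normalized ** gamma) * 255.0, 0, 255).astype(np.uint8)
    return gray

dark = np.full((4, 4), 30, dtype=np.uint8)  # a uniformly dark test image
bright = adjust_if_dark(dark)               # gamma correction brightens it
```

Here a uniformly dark image (level 1) is pushed into the mid-brightness range, mirroring the cited "set a gamma value that increases the brightness" step.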
Furthermore, Kamiya teaches wherein a first one of the two or more of the plurality of video analytics algorithms has a higher priority than a second one of the two or more of the plurality of video analytics algorithms, and adjusting one or more of the video parameters of the video stream comprises adjusting one or more of the video parameters of the video stream to achieve a higher accuracy level for the first one of the two or more of the plurality of video analytics algorithms relative to an accuracy level for the second one [Paragraphs [0028]-[0039], Figs. 2A-2B: a plurality of selection patterns (A, B, C, . . . ), as an arrangement of priorities of one video analytics algorithm over others, which are combinations of the recognition dictionaries and the image recognition algorithms, are defined in the storage section 11. These selection patterns are prepared to provide optimum detection performance for various environmental conditions, taking into account their different algorithms and their different assumed environmental conditions]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the multiple image recognition algorithms of Kamiya as above, to increase detection of objects in environments that vary over time (Kamiya, Paragraphs [0006]-[0009]).

However, Lee, Burch, and Kamiya do not explicitly disclose that adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, comprises adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream. Wuergler teaches adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream that is subsequently provided by the video camera, comprising adjusting one or more of the video camera settings of the video camera, which change one or more of the video parameters of the video stream [Paragraphs [0021]-[0023] & [0036]-[0039]: automatic adjustment of the resolution of the dynamic pixel images, as a video parameter; the controller 50 may control an operation of the camera 20 using active camera control signals (arrow 35), e.g., to command panning, tilting, or optical zoom of the camera 20, as video camera settings. In other embodiments, the controller 50 may process the collected pixel images (arrow 25) in software by cropping, zooming, or enhancing the collected pixel images]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution adjustment features of Wuergler as above, to enhance the dynamic pixel images through camera operations including panning, tilting, and zooming to improve upon possible limitations (Wuergler, Paragraphs [0004] & [0022]).

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (CN 111797694 A) (hereinafter Lee), Burch et al. (US 2019/0073747 A1) (hereinafter Burch), and Wuergler et al. (US 2017/0151846 A1) (hereinafter Wuergler) in view of Bataller et al. (US 2016/0350921 A1) (hereinafter Bataller).

Regarding claim 21, Lee, Burch, and Wuergler disclose the method of claim 1, and are analyzed as previously discussed with respect to that claim.
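As a hypothetical sketch only (not from Kamiya or the claims), the claim 8 priority scheme discussed above can be illustrated as follows: when two analytics algorithms want different values for the same video parameter, weight the adjustment toward the higher-priority algorithm. The names, priority values, and the priority-weighted averaging rule are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Analytic:
    """One video analytics algorithm with its priority and preferences."""
    name: str
    priority: int              # larger value means higher priority
    desired: dict             # desired video parameters, e.g. {"brightness": 140.0}

def blend_parameters(analytics: list) -> dict:
    """Priority-weighted average of each desired video parameter, so the
    result lands closer to the higher-priority algorithm's preference."""
    total = sum(a.priority for a in analytics)
    keys = {k for a in analytics for k in a.desired}
    return {
        k: sum(a.desired.get(k, 0.0) * a.priority for a in analytics) / total
        for k in keys
    }

lpr = Analytic("license_plate", priority=3, desired={"brightness": 140.0})
motion = Analytic("motion", priority=1, desired={"brightness": 100.0})
target = blend_parameters([lpr, motion])  # brightness ends up nearer 140
```

With priorities 3 and 1, the blended target is (140*3 + 100*1)/4 = 130, biased toward the license-plate algorithm, which mirrors "achieve a higher accuracy level for the first one ... relative to ... the second one."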
Furthermore, Burch teaches wherein the set of desired video parameters comprises a desired parameter for achieving the desired accuracy level for the video analytics algorithm [Paragraphs [0034], [0041]-[0043] & [0054]: data store 420 can store selection parameters, wherein the selection parameters include width, height, and scaling factor, as a desired minimum frame resolution parameter]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the resolution selection parameters of Burch as above, for scaling, in order to improve the quality of video frames displayed on a display device as video frames are rendered and displayed in an output resolution that is greater than the original output resolution in which the video frames were intended to be rendered and displayed (Burch, Paragraphs [0002], [0017] & [0025]).

However, Lee, Burch, and Wuergler do not explicitly disclose wherein the set of desired video parameters comprises a desired video camera placement parameter for producing a desired distance and/or angle relative to a subject of interest in the Field of View (FOV) of the video camera for achieving the desired accuracy level for the video analytics algorithm. Bataller teaches wherein the set of desired video parameters comprises a desired video camera placement parameter for producing a desired distance and/or angle relative to a subject of interest in the FOV of the video camera for achieving the desired accuracy level for the video analytics algorithm [Paragraphs [0036], [0047] & [0079], Figs. 1 & 6: the system may use the determined intrinsic and extrinsic video camera parameters, as a camera mounting height parameter, to automatically adjust physical settings of the video camera, including the height of the video camera, to create the desired angle in the field of view 108 of subject 604].
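As an illustration only, the camera-placement relationship Bataller is cited for (mounting height producing a desired viewing angle toward a subject) reduces to ordinary trigonometry. The formula below is generic geometry, not a formula from Bataller, and the numbers are made up.

```python
import math

def mounting_height(distance_m: float, angle_deg: float) -> float:
    """Height above the subject needed so that a camera at horizontal
    distance `distance_m` views the subject at `angle_deg` below horizontal."""
    return distance_m * math.tan(math.radians(angle_deg))

# A camera 10 m from the subject needs to sit about 5.77 m above it
# to achieve a 30-degree downward viewing angle.
h = mounting_height(distance_m=10.0, angle_deg=30.0)
```

A system like the one the examiner describes could compare the actual mounting height against this calibrated value and either adjust the camera or alert the user, as the cited Paragraph [0079] suggests.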
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Lee to incorporate and implement the camera height adjustment features of Bataller as above, to automatically adjust the height to the calibrated height or generate an alert that informs a user of the video camera that the video camera height needs adjusting (Bataller, Paragraph [0079]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL CHANG, whose telephone number is (571) 272-5707. The examiner can normally be reached M-Sa, 12 PM - 10 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL CHANG/
Primary Examiner, Art Unit 2487

Prosecution Timeline

May 17, 2022 — Application Filed
Aug 24, 2024 — Non-Final Rejection (§103)
Nov 21, 2024 — Response Filed
Feb 21, 2025 — Final Rejection (§103)
Apr 18, 2025 — Response after Non-Final Action
May 02, 2025 — Request for Continued Examination
May 09, 2025 — Response after Non-Final Action
Jun 28, 2025 — Non-Final Rejection (§103)
Oct 01, 2025 — Response Filed
Dec 06, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593069 — LOW MEMORY DESIGN FOR MULTIPLE REFERENCE LINE SELECTION SCHEME — granted Mar 31, 2026 (2y 5m to grant)
Patent 12587672 — DECOUPLED MODE INFERENCE AND PREDICTION — granted Mar 24, 2026 (2y 5m to grant)
Patent 12574541 — IMAGE PROCESSING METHOD AND ASSOCIATED IMAGE PROCESSING CIRCUIT — granted Mar 10, 2026 (2y 5m to grant)
Patent 12570145 — AUTOSTEREOSCOPIC CAMPFIRE DISPLAY — granted Mar 10, 2026 (2y 5m to grant)
Patent 12574513 — METHOD AND DEVICE FOR ENCODING/DECODING VIDEO SIGNAL BY USING OPTIMIZED CONVERSION BASED ON MULTIPLE GRAPH-BASED MODEL — granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 76% (+13.0%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
