DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s response to the last Office Action, filed 6/30/2025, has been entered and made of record.
Applicant has amended claims 1, 14, and 21. Claims 7 and 20 are cancelled. Claims 1-6, 8-19, and 21 are currently pending.
Applicant's arguments filed 9/30/2025 have been fully considered but they are not persuasive.
Applicant argues that none of the references teaches "wherein identifying the motion of interest further comprises shutting down the virtual stream if the identified motion of interest is stationary for a period of time in the virtual stream." Examiner refers to Tang et al, which teaches "The computer vision operations may be configured to detect characteristics of the detected objects, behavior of the objects detected, a movement direction of the objects detected and/or a liveness of the objects detected . . . The behavior and/or liveness may be determined in response to the type of object and/or the characteristics of the objects detected" (paragraph [0175]). Tang et al further teaches that the motion sensor of the sensors 164 may be configured to remain on (e.g., always active) unless disabled (i.e., shut down) in response to feedback from the processor/SoC 102. Note that the processor/SoC relies on the computer vision that detects the characteristics of the detected objects; in this case, the detected characteristic can be that the object is stationary for a period of time. The video analytics performed by the processor/SoC 102 may have a relatively large drain on the battery 152 (e.g., greater than the motion sensor 164). In an example, the processor/SoC 102 may be in a low-power state (or power-down) until some motion is detected by the motion sensor of the sensors 164 (paragraph [0079]).
Tang et al further teaches the video frame 350 may have been generated and analyzed/evaluated by the dynamic AI metering components 300 (note that the analysis includes whether the identified motion of interest is stationary for a period of time), the AI metering technique may have calculated the AE parameters in response to the example video frame 350, and after the capture device 104 has been adjusted according to the calculated AE parameters, the adjusted video frame 450 may have been captured. In the example shown, the adjusted video frame 450 may correspond to a time when the AI ROI metering control has taken effect and the AE becomes stable (paragraphs [0202-0203]). Additionally, Examiner relied on the secondary reference Williams et al, which teaches that a cut-out, which is a zoomed-in and perspective-corrected portion of each video frame, is created, and the portion of each video frame which is defined by the cut-out is fed at the predetermined frame rate to a device such as a smartphone, tablet or television for viewing by the user. The image viewed on this device mimics that which would have been captured by a real camera with a yaw, pitch and zoom adjusted so as to capture a portion of the scene of the sporting event which has been captured in full in the video recording (paragraph [0091]). All remaining arguments rely on the aforementioned and addressed arguments and thus are considered to be wholly addressed herein.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Tang et al (US 2023/0419505) in view of Williams et al (US 2018/0018510).
As to claim 1, Tang et al teaches the method for monitoring motion in a video stream, the method comprising:
identifying a motion of interest from motion present in a virtual stream (the computer vision AI models may be configured to detect various objects, subjects and/or events of interest in the YUV images generated from the video frames 302a-302n, paragraph [0138]), the virtual stream covering an area of the video stream at which motion is present (the computer vision AI model implemented by the ROI detection module 306 may be configured to perform motion detection, paragraph [0138]),
the motion of interest being a motion that is targeted for monitoring, wherein the area of the video stream being covered is controllable by a pan tilt value of the virtual stream (the processor/logic 182 may adjust an exposure of the image sensor 180 in response to the signal AE_PARAM. In another example, the processor/logic 182 may adjust a DC iris and/or a shutter time for the image sensor 180 in response to the signal AE_PARAM. In yet another example, the processor/logic 182 may adjust a zoom/tilt/pan/focus of the capture device 104 in response to the signal AE_PARAM, paragraph [0068]); wherein identifying the motion of interest further comprises shutting down the virtual stream if the identified motion of interest is stationary for a period of time in the virtual stream (Tang et al teaches the motion sensor of the sensors 164 may be configured to remain on (e.g., always active) unless disabled (i.e., shut down) in response to feedback from the processor/SoC 102. Note that the processor/SoC relies on the computer vision that detects the characteristics of the detected objects; in this case, the detected characteristic can be that the object is stationary for a period of time. The video analytics performed by the processor/SoC 102 may have a relatively large drain on the battery 152 (e.g., greater than the motion sensor 164). In an example, the processor/SoC 102 may be in a low-power state (or power-down) until some motion is detected by the motion sensor of the sensors 164, paragraph [0079]).
Tang et al further teaches the video frame 350 may have been generated and analyzed/evaluated by the dynamic AI metering components 300 (note that the analysis includes whether the identified motion of interest is stationary for a period of time), the AI metering technique may have calculated the AE parameters in response to the example video frame 350, and after the capture device 104 has been adjusted according to the calculated AE parameters, the adjusted video frame 450 may have been captured. In the example shown, the adjusted video frame 450 may correspond to a time when the AI ROI metering control has taken effect and the AE becomes stable (paragraphs [0202-0203]).
While Tang et al teaches the limitations above, Tang fails to teach "adjusting the pan tilt value of the virtual stream to position the motion of interest at a center of the virtual stream."
Specifically, Williams et al teaches in FIG. 10, each x position and y position is recorded in meters (m) with respect to the Centre 1008 of the soccer pitch 1010, the Centre 1008 forming the origin of the (x, y) coordinate system within which the position of "Player 1" is recorded. The "Player 1" positions 1000, 1002 and 1004 are illustrated on the soccer pitch 1010 in FIG. 10. Tracking and recording the (x, y) position of a soccer player on a soccer pitch as shown in FIG. 10 may be carried out using any method known in the art. At each recorded position of "Player 1", the yaw, pitch and zoom of the virtual camera used by the video clip generator 710 to generate the cut-out may be adjusted on the basis of a mapping between the pitch coordinates (x, y) and the yaw, pitch and zoom so that "Player 1" is featured in the output cut-out image. Such a mapping may be carried out on the basis of any suitable technique known in the art. For example, the same technique as described above for the determination of the pan (yaw), pitch (tilt) and zoom of a real camera in response to the determination of the (x, y) position of a player on a soccer pitch can be used (although this time, of course, it is the yaw, pitch and zoom of a virtual camera which is determined). As mentioned above, the cut-out image is a zoomed-in, perspective corrected portion of the frames of the video recording, and thus produces an image which mimics that of a real camera following "Player 1" (paragraphs [0098-0099]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adjust the pan tilt value to position the motion of interest at the center and to use the calibration technique of Williams et al in order to provide interactive review and analysis capability of the video clips in the presentation. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
As to claim 2, Williams et al teaches the method for monitoring motion in the video stream comprising detecting the area at which motion is present in the video stream by disregarding areas of the video stream that are stationary and generating the virtual stream to cover an area of the video stream that is moving (That is, a cut-out, which is a zoomed-in and perspective-corrected portion of each video frame, is created, and the portion of each video frame which is defined by the cut-out is fed at the predetermined frame rate to a device such as a smartphone, tablet or television for viewing by the user. The image viewed on this device mimics that which would have been captured by a real camera with a yaw, pitch and zoom adjusted so as to capture a portion of the scene of the sporting event which has been captured in full in the video recording, paragraph [0091]).
As to claim 3, Williams et al teaches the method for monitoring motion in the video stream wherein adjusting the pan tilt value of the virtual stream further comprises determining a coordinate corresponding to a center of the motion of interest in the video stream, and adjusting the pan tilt value of the virtual stream to position the center of the virtual stream at the determined coordinate (paragraphs [0098-0100][0109]).
As to claim 4, Williams et al teaches the method for monitoring motion in the video stream wherein adjusting the pan tilt value of the virtual stream is done in real time (a cut-out (or virtual camera view) of the video frames can be created which mimics the output of a broadcast video camera capturing the image in real-time; paragraph [0091]).
As to claim 5, Tang et al teaches the method for monitoring motion in the video stream according to claim 1, wherein identifying the motion of interest further comprises shutting down the virtual stream if the motion of interest is not identified in the virtual stream (paragraphs [0178-0180]).
As to claim 6, Tang et al teaches the method for monitoring motion in the video stream according to claim 1, wherein identifying the motion of interest further comprises shutting down the virtual stream if a plurality of motion of interest are identified in the virtual stream, and generating a plurality of new virtual streams, each of the new virtual streams covering an area of the video stream at which each of the plurality of motion of interest is present (in the adjusted video frame 450, the ROIs 370a-370b and the new ROI 454 may have the ROI information. The ROI 370a may have the same ROI ID 380a of zero, the ROI 370b may have the same ROI ID 380b of one and the new ROI 454 may have the next value for the ROI ID 380c of two, paragraphs [0205-0206]).
As to claim 8, Tang et al teaches the method for monitoring motion in the video stream according to claim 1, further comprising detecting whether the identified motion of interest is moving and deviating from a previous position, and regenerating the virtual stream in response to the detection (paragraphs [0079-0080][0202-0204]).
As to claim 9, Tang et al teaches the method for monitoring motion in the video stream, further comprising generating one or more virtual streams, each with a frame of a predetermined height and width, each virtual stream covering one or more areas of the video stream at which motion is present (the characteristics of the objects may comprise a height, length, width, slope, an arc length, a color, an amount of light emitted, detected text on the object, a path of movement, a speed of movement, a direction of movement, a proximity to other objects, etc. The characteristics of the detected object may comprise a status of the object (e.g., opened, closed, on, off, etc.). The behavior and/or liveness may be determined in response to the type of object and/or the characteristics of the objects detected. While one example video frame 350 is shown, the behavior, movement direction and/or liveness of an object may be determined by analyzing a sequence of the video frames 302a-302n captured over time, paragraph [0175]).
As to claim 10, Tang et al teaches detecting a plurality of areas at which motion is present in the video stream; dividing the plurality of areas into one or more groups, each group comprising at least two areas, wherein the at least two areas occupy a total height and width in the video stream that do not exceed the predetermined height and width of the frame of the virtual stream respectively; and generating each of the one or more virtual streams to cover each of the one or more groups (paragraphs [0175-0176]).
As to claim 11, Williams et al teaches wherein generating each of the one or more virtual streams further comprises: determining a coordinate for a center of each of the plurality of areas; and calculating an average center coordinate for each group based on an average of the determined coordinate of each of the at least two areas in each group; wherein a center of the frame of each generated virtual stream is positioned at the average center coordinate for an associated group (paragraphs [0170][0178]).
As to claim 12, Williams et al teaches the method for monitoring motion in the video stream, further comprising updating the average center coordinate for each group in real time, and adjusting the pan tilt value of each virtual stream to position the center of the frame at the updated average center coordinate (paragraphs [0178-0183]).
As to claim 13, Williams et al teaches the method for monitoring motion in the video stream, wherein generating the one or more virtual streams further comprises adjusting the pan tilt value of each virtual stream to position each frame within a boundary of the video stream (paragraphs [0053][0099-0100][0113]).
Claims 14-19 and 21 have been addressed above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mrs. Jennifer Mehmood can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NANCY BITAR
Examiner
Art Unit 2664
/NANCY BITAR/Primary Examiner, Art Unit 2664