Prosecution Insights
Last updated: April 19, 2026
Application No. 18/634,153

OPTICAL FLOW ESTIMATION METHOD AND APPARATUS

Status: Non-Final OA (§103)
Filed: Apr 12, 2024
Examiner: HOANG, HAN DINH
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (grants above average); 120 granted / 162 resolved; +12.1% vs TC avg
Interview Lift: +19.3% (strong), among resolved cases with interview
Avg Prosecution (typical timeline): 3y 2m
Currently Pending: 25
Total Applications: 187, across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.7% (+25.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Based on career data from 162 resolved cases.
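The headline figures above are internally consistent and can be checked with a few lines of Python. This is a sketch of the presumed definitions only: it assumes the tool computes "allow rate" as granted / resolved and the "vs TC avg" delta as the examiner's rate minus the Tech Center average, which the dashboard does not state explicitly.

```python
# Sanity-check the examiner statistics shown above.
# Assumption: allow rate = granted / resolved, and the "+12.1% vs TC avg"
# delta is (examiner rate - tech-center average). Both are guesses about
# how the analytics tool defines these figures.
granted, resolved = 120, 162

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~74.1%, matching the 74% shown

# The +12.1% delta would imply a Tech Center average of about 62%:
implied_tc_avg = allow_rate - 0.121
print(f"Implied TC average: {implied_tc_avg:.1%}")
```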

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/06/2025 and 09/25/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 4 is objected to because of the following informalities: “wherein the determining a first optical flow based on the first image frame, the second image frame, and the first event frame” should be rewritten as “wherein the determining the first optical flow based on the first image frame, the second image frame, and the first event frame”. Appropriate correction is required.

Claim 5 is objected to because of the following informalities: “wherein the inputting the first image frame, the second image frame, and the first event frame to a preset optical flow estimation model to obtain the first optical flow comprises:” should be rewritten as “wherein the inputting the first image frame, the second image frame, and the first event frame to the preset optical flow estimation model to obtain the first optical flow comprises:”. Appropriate correction is required.

Claim 6 is objected to because of the following informalities: “wherein the determining a first optical flow allocation mask based on the second event frame comprises:” should be rewritten as “wherein the determining the first optical flow allocation mask based on the second event frame comprises:”. Appropriate correction is required.
Claim 7 is objected to because of the following informalities: “wherein the inputting the second event frame to a preset optical flow allocation model to obtain the first optical flow allocation mask comprises:” should be rewritten as “wherein the inputting the second event frame to the preset optical flow allocation model to obtain the first optical flow allocation mask comprises:”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 9-11, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bedikian et al. (US 10,600,189 B1) in view of Liu et al. (CN 109922372 A, as cited by applicant in the IDS filed 01/06/2025).

Regarding Claim 1, Bedikian teaches an optical flow estimation method (Col. 1, Lines 65-67: “This disclosure is directed to systems, methods, and computer readable media for utilizing optical flow from an event camera”), comprising: obtaining a first image frame and a second image frame, wherein the image sequence is obtained by photographing a target scene (see Col. 4, Lines 32-37 and Col. 5, Lines 46-53, where a regular camera captures image frames at a frame rate); obtaining a first event frame (FIG. 4 shows a depiction of event frames 405, 410, and 415 over time), wherein the first event frame is used to describe a luminance change of the target scene within a time period from the first image frame to the second image frame (Col. 4, Lines 10-19: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object. The motion detection module 160 may analyze the changes in brightness over a subset of pixels, such as pixels associate with a particular object or feature of an object, to determine a velocity of the object. That is, the event flow may provide indications of a change in brightness at a particular pixel along with a timestamp identifying when the event (i.e., change in brightness) occurred. By analyzing a subset of pixels in an image, the motion of a particular feature may be calculated.” As disclosed in this section, an event stream is received, brightness changes are calculated, and a timestamp of each change is recorded.); and determining a target optical flow based on the first image frame, the second image frame, and the first event frame (Col. 4, Lines 55-58: “subset of pixels may correspond to pixels in the image within which the object or feature of the object was detected in 210. According to one or more embodiments, tracking movement within a small number of pixels of an image may provide preferable results if there are multiple objects moving in a scene. That is, a smaller subset of pixels will likely result in less false matches in an event flow.” As disclosed in this section of the prior art, the optical flow is calculated using multiple frames.), wherein the target optical flow is an optical flow from the first image frame to a target moment, and the target moment is any moment between the first image frame and the second image frame (Col. 5, Lines 44-58 disclose that the system can use the optical flow to place the ball at any location in the 60 frames/sec video data based on the determined optical flow).
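For readers less familiar with event cameras, the event/event-frame distinction at issue in this mapping can be sketched in a few lines of Python. This is an editorial illustration only; the `Event` layout follows the (timestamp, pixel coordinates, polarity) fields Bedikian describes at Col. 3, but the function name and accumulation rule are hypothetical, not taken from Bedikian or the claims.

```python
from dataclasses import dataclass

# Illustrative event record: per Bedikian Col. 3, an event carries pixel
# coordinates, a timestamp, and a polarity (+1 brightness up, -1 down).
@dataclass
class Event:
    t: float   # timestamp, in seconds
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 or -1

def event_frame(events, t0, t1, height, width):
    """Accumulate polarities of events with t0 <= t < t1 into an H x W grid."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y][e.x] += e.p
    return frame

# Events occurring between two image frames captured at t=0.0 and t=0.033:
events = [Event(0.010, 1, 0, +1), Event(0.020, 1, 0, +1), Event(0.025, 2, 1, -1)]
print(event_frame(events, 0.0, 0.033, 2, 3))  # [[0, 2, 0], [0, 0, -1]]
```

The time window [t0, t1) is what distinguishes the claimed "first event frame" (window from the first image frame to the second) from the "second event frame" (window from the first image frame to the target moment).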
Bedikian does not explicitly teach wherein the first image frame and the second image frame are any two adjacent image frames in an image sequence. Liu teaches wherein the first image frame and the second image frame are any two adjacent image frames in an image sequence (Abstract: “obtaining the first frame rate of the first video data stream; motion data obtaining time of continuous two frame intermediate time the video data in the first video data stream according to the time to obtain all events between the continuous two frames of video data, a second video data stream according to the first video data stream and said motion data”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Bedikian with Liu in order to use adjacent image frames in the image sequence. One skilled in the art would have been motivated to modify Bedikian in this manner in order to acquire all motion data between two frames of a low-frame-rate common camera in a fast motion scene (Liu, Page 9, Second to Last Paragraph).

Regarding Claim 2, the combination of Bedikian and Liu teaches the method according to claim 1, where Bedikian further teaches wherein before the determining a target optical flow based on the first image frame, the second image frame, and the first event frame (Col. 4, Lines 1-6: “feature detection module is configured to identify a feature in an image. In one or more embodiments, feature detection module 155 may detect a feature of an object in an image captured by a traditional camera, or based on an event stream, or both. The feature detection module 155 may, for example, detect a feature, identify feature descriptors for the feature, and identify the feature based on the descriptors.” This section of the prior art discloses determining an object in the event stream before calculating optical flow.), the method further comprises: obtaining a second event frame, wherein the second event frame is used to describe a luminance change of the target scene within a time period from the first image frame to the target moment (Col. 6, Lines 50-61: “FIG. 4 shows a depiction of event frames 405, 410, and 415 over time. It should be understood that the various event frames are not actual frames captured by a camera. Rather, the depicted event frames 405, 410, and 415 depict, for each of a subset of pixels of an image, whether a brightness has changed at a particular time. According to one or more embodiments, the various subset of pixels may be expressed as a collection of data sets indicating a timestamp corresponding to a change in brightness of a particular pixel at a particular location. In one or more embodiments, pixel locations may be defined by pixel coordinates within an image or on a sensor.” In this section of the prior art, multiple event frames are acquired and processed to determine a change in brightness.); and wherein the determining a target optical flow based on the first image frame, the second image frame, and the first event frame comprises: determining the target optical flow based on the first image frame, the second image frame, the first event frame, and the second event frame (Col. 7, Lines 33-48: “FIG. 5 shows, in chart form, an example dynamic event flow used to determine a velocity of an object, according to one or more embodiments. More specifically, FIG. 5 depicts an example of an event flow over a period of time T0-T3 during which the subset of pixels is dynamically modified. As shown, event frame 505 includes nine pixels, with a change in brightness detected at three pixels. For purposes of this example, it could be determined that the initial set of pixels is insufficient. Thus, at 510, a set of 16 pixels is considered at T2. The same subset of pixels is also considered at 515, where there is an apparent movement of the indication of a change in brightness. Then, at 520, a different subset of pixels is considered. According to one or more embodiments, the apparent direction of the change in brightness may indicate that the nine pixels at the left edge and bottom edge are no longer needed.” As shown in Figure 5, multiple event frames from the time period T0-T3 are used to calculate the velocity of an object.).

Regarding Claim 9, the combination of Bedikian and Liu teaches the method according to claim 1, where Bedikian further teaches wherein the obtaining a first event frame comprises: obtaining event flow data (Col. 4, Lines 10-13: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object.” These lines disclose receiving an event stream to determine motion in the stream.), wherein the event flow data comprises event data of each event in at least one event, the at least one event one-to-one corresponds to at least one luminance change that occurs in the target scene between the first image frame and the second image frame (Col. 4, Lines 10-19: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object. The motion detection module 160 may analyze the changes in brightness over a subset of pixels, such as pixels associate with a particular object or feature of an object, to determine a velocity of the object. That is, the event flow may provide indications of a change in brightness at a particular pixel along with a timestamp identifying when the event (i.e., change in brightness) occurred. By analyzing a subset of pixels in an image, the motion of a particular feature may be calculated.” These lines disclose determining motion from the event stream by detecting brightness changes from one time period to another and marking the timestamp of when the event occurred.), and the event data of each event comprises a timestamp, pixel coordinates, and a polarity (Col. 3, Lines 15-19: “Each event may include pixel coordinates for a pixel at which the event is detected, a timestamp at which the event is detected, and a polarity which indicates a direction in change of brightness.” These lines disclose that the event data comprises pixel coordinates, a timestamp, and a polarity.); and obtaining the first event frame based on the event flow data (Col. 6, Lines 53-55: “the depicted event frames 405, 410, and 415 depict, for each of a subset of pixels of an image, whether a brightness has changed at a particular time.” These lines disclose obtaining multiple event frames to determine brightness change from the event stream data.).

Regarding Claim 10, Bedikian teaches an optical flow estimation apparatus (Fig. 1), comprising: at least one processor (Fig. 1, Element CPU 130); and one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to cause the optical flow estimation apparatus (Col. 3, Lines 37-45: “Electronic Device 100 may include a central processing unit (CPU) 130. Processor 130 may be a system-on-chip such as those found in mobile devices and include one or more dedicated graphics processing units (GPUs). Further processor 130 may include multiple processors of the same or different type. Electronic Device 100 may also include a memory 140. Memory 140 may each include one or more different types of memory, which may be used for performing device functions in conjunction with CPU 130.”) to: obtain a first image frame and a second image frame, wherein the image sequence is obtained by photographing a target scene (see Col. 4, Lines 32-37 and Col. 5, Lines 46-53, where a regular camera captures image frames at a frame rate); obtain a first event frame (FIG. 4 shows a depiction of event frames 405, 410, and 415 over time), wherein the first event frame is used to describe a luminance change of the target scene within a time period from the first image frame to the second image frame (Col. 4, Lines 10-19: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object. The motion detection module 160 may analyze the changes in brightness over a subset of pixels, such as pixels associate with a particular object or feature of an object, to determine a velocity of the object. That is, the event flow may provide indications of a change in brightness at a particular pixel along with a timestamp identifying when the event (i.e., change in brightness) occurred. By analyzing a subset of pixels in an image, the motion of a particular feature may be calculated.” As disclosed in this section, an event stream is received, brightness changes are calculated, and a timestamp of each change is recorded.); and determine a target optical flow based on the first image frame, the second image frame, and the first event frame (Col. 4, Lines 55-58: “subset of pixels may correspond to pixels in the image within which the object or feature of the object was detected in 210. According to one or more embodiments, tracking movement within a small number of pixels of an image may provide preferable results if there are multiple objects moving in a scene. That is, a smaller subset of pixels will likely result in less false matches in an event flow.” As disclosed in this section of the prior art, the optical flow is calculated using multiple frames.), wherein the target optical flow is an optical flow from the first image frame to a target moment, and the target moment is any moment between the first image frame and the second image frame (Col. 5, Lines 44-58 disclose that the system can use the optical flow to place the ball at any location in the 60 frames/sec video data based on the determined optical flow). Bedikian does not explicitly teach wherein the first image frame and the second image frame are any two adjacent image frames in an image sequence. Liu teaches wherein the first image frame and the second image frame are any two adjacent image frames in an image sequence (Abstract: “obtaining the first frame rate of the first video data stream; motion data obtaining time of continuous two frame intermediate time the video data in the first video data stream according to the time to obtain all events between the continuous two frames of video data, a second video data stream according to the first video data stream and said motion data”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Bedikian with Liu in order to use adjacent image frames in the image sequence. One skilled in the art would have been motivated to modify Bedikian in this manner in order to acquire all motion data between two frames of a low-frame-rate common camera in a fast motion scene.
(Liu, Page 9, Second to Last Paragraph).

Regarding Claim 11, the combination of Bedikian and Liu teaches the apparatus according to claim 10, where Bedikian further teaches wherein the programming instructions, when executed by the at least one processor, cause the optical flow estimation apparatus to, before the target optical flow is determined based on the first image frame, the second image frame, and the first event frame (Col. 4, Lines 1-6: “feature detection module is configured to identify a feature in an image. In one or more embodiments, feature detection module 155 may detect a feature of an object in an image captured by a traditional camera, or based on an event stream, or both. The feature detection module 155 may, for example, detect a feature, identify feature descriptors for the feature, and identify the feature based on the descriptors.” This section of the prior art discloses determining an object in the event stream before calculating optical flow.): obtain a second event frame, wherein the second event frame is used to describe a luminance change of the target scene within a time period from the first image frame to the target moment (Col. 6, Lines 50-61: “FIG. 4 shows a depiction of event frames 405, 410, and 415 over time. It should be understood that the various event frames are not actual frames captured by a camera. Rather, the depicted event frames 405, 410, and 415 depict, for each of a subset of pixels of an image, whether a brightness has changed at a particular time. According to one or more embodiments, the various subset of pixels may be expressed as a collection of data sets indicating a timestamp corresponding to a change in brightness of a particular pixel at a particular location. In one or more embodiments, pixel locations may be defined by pixel coordinates within an image or on a sensor.” In this section of the prior art, multiple event frames are acquired and processed to determine a change in brightness.); and wherein the determine a target optical flow based on the first image frame, the second image frame, and the first event frame comprises: determining the target optical flow based on the first image frame, the second image frame, the first event frame, and the second event frame (Col. 7, Lines 33-48: “FIG. 5 shows, in chart form, an example dynamic event flow used to determine a velocity of an object, according to one or more embodiments. More specifically, FIG. 5 depicts an example of an event flow over a period of time T0-T3 during which the subset of pixels is dynamically modified. As shown, event frame 505 includes nine pixels, with a change in brightness detected at three pixels. For purposes of this example, it could be determined that the initial set of pixels is insufficient. Thus, at 510, a set of 16 pixels is considered at T2. The same subset of pixels is also considered at 515, where there is an apparent movement of the indication of a change in brightness. Then, at 520, a different subset of pixels is considered. According to one or more embodiments, the apparent direction of the change in brightness may indicate that the nine pixels at the left edge and bottom edge are no longer needed.” As shown in Figure 5, multiple event frames from the time period T0-T3 are used to calculate the velocity of an object.).

Regarding Claim 18, the combination of Bedikian and Liu teaches the apparatus according to claim 10, where Bedikian further teaches wherein the programming instructions, when executed by the at least one processor, cause the optical flow estimation apparatus to: obtain event flow data (Col. 4, Lines 10-13: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object.” These lines disclose receiving an event stream to determine motion in the stream.), wherein the event flow data comprises event data of each event in at least one event, the at least one event one-to-one corresponds to at least one luminance change that occurs in the target scene between the first image frame and the second image frame (Col. 4, Lines 10-19: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object. The motion detection module 160 may analyze the changes in brightness over a subset of pixels, such as pixels associate with a particular object or feature of an object, to determine a velocity of the object. That is, the event flow may provide indications of a change in brightness at a particular pixel along with a timestamp identifying when the event (i.e., change in brightness) occurred. By analyzing a subset of pixels in an image, the motion of a particular feature may be calculated.” These lines disclose determining motion from the event stream by detecting brightness changes from one time period to another and marking the timestamp of when the event occurred.), and the event data of each event comprises a timestamp, pixel coordinates, and a polarity (Col. 3, Lines 15-19: “Each event may include pixel coordinates for a pixel at which the event is detected, a timestamp at which the event is detected, and a polarity which indicates a direction in change of brightness.” These lines disclose that the event data comprises pixel coordinates, a timestamp, and a polarity.); and obtain the first event frame based on the event flow data (Col. 6, Lines 53-55: “the depicted event frames 405, 410, and 415 depict, for each of a subset of pixels of an image, whether a brightness has changed at a particular time.” These lines disclose obtaining multiple event frames to determine brightness change from the event stream data.).

Regarding Claim 19, Bedikian teaches a non-transitory computer-readable storage media comprising instructions (Col. 1, Lines 38-40: the method may be embodied in computer executable program code and stored in a non-transitory storage device) which, when executed by one or more processors, cause the one or more processors to perform operations (Col. 3, Lines 37-45: “Electronic Device 100 may include a central processing unit (CPU) 130. Processor 130 may be a system-on-chip such as those found in mobile devices and include one or more dedicated graphics processing units (GPUs). Further processor 130 may include multiple processors of the same or different type. Electronic Device 100 may also include a memory 140. Memory 140 may each include one or more different types of memory, which may be used for performing device functions in conjunction with CPU 130.”) comprising: obtaining a first event frame (FIG. 4 shows a depiction of event frames 405, 410, and 415 over time), wherein the first event frame is used to describe a luminance change of the target scene within a time period from the first image frame to the second image frame (Col. 4, Lines 10-19: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object. The motion detection module 160 may analyze the changes in brightness over a subset of pixels, such as pixels associate with a particular object or feature of an object, to determine a velocity of the object. That is, the event flow may provide indications of a change in brightness at a particular pixel along with a timestamp identifying when the event (i.e., change in brightness) occurred. By analyzing a subset of pixels in an image, the motion of a particular feature may be calculated.” As disclosed in this section, an event stream is received, brightness changes are calculated, and a timestamp of each change is recorded.); and determining a target optical flow based on the first image frame, the second image frame, and the first event frame (Col. 4, Lines 55-58: “subset of pixels may correspond to pixels in the image within which the object or feature of the object was detected in 210. According to one or more embodiments, tracking movement within a small number of pixels of an image may provide preferable results if there are multiple objects moving in a scene. That is, a smaller subset of pixels will likely result in less false matches in an event flow.” As disclosed in this section of the prior art, the optical flow is calculated using multiple frames.), wherein the target optical flow is an optical flow from the first image frame to a target moment, and the target moment is any moment between the first image frame and the second image frame (Col. 5, Lines 44-58 disclose that the system can use the optical flow to place the ball at any location in the 60 frames/sec video data based on the determined optical flow). Bedikian does not explicitly teach wherein the first image frame and the second image frame are any two adjacent image frames in an image sequence. Liu teaches wherein the first image frame and the second image frame are any two adjacent image frames in an image sequence (Abstract: “obtaining the first frame rate of the first video data stream; motion data obtaining time of continuous two frame intermediate time the video data in the first video data stream according to the time to obtain all events between the continuous two frames of video data, a second video data stream according to the first video data stream and said motion data”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Bedikian with Liu in order to use adjacent image frames in the image sequence. One skilled in the art would have been motivated to modify Bedikian in this manner in order to acquire all motion data between two frames of a low-frame-rate common camera in a fast motion scene.
(Liu, Page 9, Second to Last Paragraph).

Regarding Claim 20, the combination of Bedikian and Liu teaches the non-transitory computer-readable storage media according to claim 19, where Bedikian further teaches wherein the obtaining a first event frame comprises: obtaining event flow data (Col. 4, Lines 10-13: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object.” These lines disclose receiving an event stream to determine motion in the stream.), wherein the event flow data comprises event data of each event in at least one event, the at least one event one-to-one corresponds to at least one luminance change that occurs in the target scene between the first image frame and the second image frame (Col. 4, Lines 10-19: “the motion detection module 160 may receive an event stream from an event camera 110, and analyze a pattern of changes in brightness over time to determine motion of an object. The motion detection module 160 may analyze the changes in brightness over a subset of pixels, such as pixels associate with a particular object or feature of an object, to determine a velocity of the object. That is, the event flow may provide indications of a change in brightness at a particular pixel along with a timestamp identifying when the event (i.e., change in brightness) occurred. By analyzing a subset of pixels in an image, the motion of a particular feature may be calculated.” These lines disclose determining motion from the event stream by detecting brightness changes from one time period to another and marking the timestamp of when the event occurred.), and the event data of each event comprises a timestamp, pixel coordinates, and a polarity (Col. 3, Lines 15-19: “Each event may include pixel coordinates for a pixel at which the event is detected, a timestamp at which the event is detected, and a polarity which indicates a direction in change of brightness.” These lines disclose that the event data comprises pixel coordinates, a timestamp, and a polarity.); and obtaining the first event frame based on the event flow data (Col. 6, Lines 53-55: “the depicted event frames 405, 410, and 415 depict, for each of a subset of pixels of an image, whether a brightness has changed at a particular time.” These lines disclose obtaining multiple event frames to determine brightness change from the event stream data.).

Allowable Subject Matter

Claims 3-8 and 12-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding Claim 3, the primary reason for the allowance of the claim is the inclusion of the limitations “determining a first optical flow allocation mask based on the second event frame, wherein the first optical flow allocation mask indicates a weight of the target optical flow relative to the first optical flow; and determining the target optical flow based on the first optical flow and the first optical flow allocation mask” in the claim, which is not found in the prior art references.
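To make the allowed limitation concrete: the claim recites a per-pixel mask that weights the target optical flow relative to the first (full inter-frame) optical flow. The sketch below reads that as an elementwise weighting; this is an editorial illustration under that assumption, since the claim text quoted here does not spell out the combination rule, and the function and data layout are hypothetical.

```python
# Hypothetical illustration of an optical flow allocation mask: a per-pixel
# weight scales the full inter-frame flow down to the flow at the target
# moment. The elementwise product is one plausible reading of "indicates a
# weight of the target optical flow relative to the first optical flow";
# treat this purely as a sketch, not the application's method.
def allocate_flow(first_flow, mask):
    """first_flow: H x W grid of (dx, dy) vectors; mask: H x W weights in [0, 1]."""
    return [
        [(dx * w, dy * w) for (dx, dy), w in zip(row, mask_row)]
        for row, mask_row in zip(first_flow, mask)
    ]

first_flow = [[(4.0, 2.0), (0.0, 0.0)]]   # flow from frame 1 to frame 2
mask = [[0.5, 1.0]]                       # e.g. target moment halfway through
print(allocate_flow(first_flow, mask))    # [[(2.0, 1.0), (0.0, 0.0)]]
```

A spatially varying mask (rather than a single scalar) would let differently moving objects receive different fractions of the inter-frame flow, which is consistent with the claim deriving the mask from the second event frame.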
It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claim. Also, it is noted that the quoted limitations, in combination with the other claim limitation of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves. Claims 4-7 would be allowed by virtue of their dependency on Claim 3. Regarding Claim 8, the primary reason for the allowance of the claim is the inclusion of the limitations, “wherein the first image frame comprises HxW pixels, both H and W are integers greater than 1, the first event frame comprises a plurality of channels, and the plurality of channels comprise a first channel, a second channel, a third channel, and a fourth channel; wherein the first channel comprises HxW first values, whcrcin the HxW first values one-to- one correspond to HxW locations of the HxW pixels, and each first value indicates a quantity of times that luminance of a pixel at a corresponding location in the first image frame increases within the time period from the first image frame to the second image frame; wherein the second channel comprises HxW second values, whcrcin the HxW second values one-to-one correspond to the HxW locations of the HxW pixels, and each second value indicates a quantity of times that luminance of a pixel at a corresponding location in the first image frame decreases within the time period from the first image frame to the second image frame; wherein the third channel comprises HxW third values, whcrcin the HxW third values one- to-one correspond to the HxW locations of the HxW pixels, and each third value indicates a timestamp at which luminance of a pixel at a corresponding location in the first image frame increases for the last time within the time period from the first image frame to the second image frame; and wherein the fourth channel 
comprises HxW fourth values, wherein the HxW fourth values one-to-one correspond to the HxW locations of the HxW pixels, and each fourth value indicates a timestamp at which luminance of a pixel at a corresponding location in the first image frame decreases for the last time within the time period from the first image frame to the second image frame.”, in the claim, which are not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claim. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.

Regarding Claim 12, the primary reason for the allowance of the claim is the inclusion of the limitations, “determine a first optical flow allocation mask based on the second event frame, wherein the first optical flow allocation mask indicates a weight of the target optical flow relative to the first optical flow; and determine the target optical flow based on the first optical flow and the first optical flow allocation mask.”, in the claim, which are not found in the prior art references. It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claim. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.

Claims 13-16 would be allowed by virtue of their dependency on Claim 12.
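The four-channel event-frame layout recited in Claim 8 is essentially a per-pixel accumulator over the event stream: per-pixel counts of brightness increases and decreases, plus the timestamps of the last increase and last decrease. A minimal sketch under that reading; the function name and event-tuple layout are assumptions for illustration, not from the application, and pixels with no events are left at 0:

```python
import numpy as np

def build_event_frame(events, H, W):
    """Accumulate events into a 4-channel event frame of shape (4, H, W).

    events: iterable of (timestamp, x, y, polarity) tuples, with polarity +1
            for a brightness increase and -1 for a decrease, all falling within
            the time period between the first and second image frames.
    Channel 0: count of increases per pixel
    Channel 1: count of decreases per pixel
    Channel 2: timestamp of the last increase per pixel
    Channel 3: timestamp of the last decrease per pixel
    """
    frame = np.zeros((4, H, W), dtype=np.float64)
    # Sort by timestamp so "last increase/decrease" is well defined.
    for t, x, y, p in sorted(events):
        if p > 0:
            frame[0, y, x] += 1
            frame[2, y, x] = t
        else:
            frame[1, y, x] += 1
            frame[3, y, x] = t
    return frame

# Two increases and one decrease at pixel (x=2, y=1):
evts = [(0.1, 2, 1, +1), (0.4, 2, 1, -1), (0.7, 2, 1, +1)]
ef = build_event_frame(evts, H=4, W=4)
print(ef[0, 1, 2], ef[2, 1, 2])  # 2.0 0.7
```

The point of the sketch is that the claimed representation preserves both how often and how recently each pixel changed, which an ordinary frame difference would discard.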
Regarding Claim 17, the primary reason for the allowance of the claim is the inclusion of the limitations, “wherein the first image frame comprises HxW pixels, both H and W are integers greater than 1, the first event frame comprises a plurality of channels, and the plurality of channels comprise a first channel, a second channel, a third channel, and a fourth channel; wherein the first channel comprises HxW first values, wherein the HxW first values one-to-one correspond to HxW locations of the HxW pixels, and each first value indicates a quantity of times that luminance of a pixel at a corresponding location in the first image frame increases within the time period from the first image frame to the second image frame; wherein the second channel comprises HxW second values, wherein the HxW second values one-to-one correspond to the HxW locations of the HxW pixels, and [[the]]each second value indicates a quantity of times that luminance of a pixel at a corresponding location in the first image frame decreases within the time period from the first image frame to the second image frame; wherein the third channel comprises HxW third values, wherein the HxW third values one-to-one correspond to the HxW locations of the HxW pixels, and each third value indicates a timestamp at which luminance of a pixel at a corresponding location in the first image frame increases for the last time within the time period from the first image frame to the second image frame; and wherein the fourth channel comprises HxW fourth values, wherein the HxW fourth values one-to-one correspond to the HxW locations of the HxW pixels, and each fourth value indicates a timestamp at which luminance of a pixel at a corresponding location in the first image frame decreases for the last time within the time period from the first image frame to the second image frame.”, in the claim, which are not found in the prior art references.
It is noted that the examiner has not found any other prior art to anticipate or obviate the quoted claim limitations supra, when read in light/combination of the other claimed limitations within the cited claim. Also, it is noted that the quoted limitations, in combination with the other claim limitations of the cited claim, deem the claim patentable, not just the consideration of the quoted limitations by themselves.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jiao et al. (US PG-Pub US 20220417590 A1) discloses capturing optical flow information from a frame in the video data acquired in ¶[0067], as shown in a flowchart in figure 1. Kamboj et al. (US PG-Pub US 20220222829 A1) discloses determining a motion vector of two frames and obtaining a segmentation mask of a previous frame in ¶[0067]. Daniilidis et al. (US PG-Pub US 20200265590 A1) discloses using a CNN to predict optical flow from event camera images in ¶[0002]; the model is shown in figure 2. Li et al. (“A Lightweight Network to Learn Optical Flow from Event Data”) discloses a lightweight network to learn optical flow from event data in the abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG whose telephone number is (571)272-4344. The examiner can normally be reached Monday-Friday 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN M VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAN HOANG/
Examiner, Art Unit 2661

Prosecution Timeline

Apr 12, 2024
Application Filed
Feb 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602835
POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602778
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12602918
LEARNING DATA GENERATING APPARATUS, LEARNING DATA GENERATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM HAVING LEARNING DATA GENERATING PROGRAM RECORDED THEREON
2y 5m to grant Granted Apr 14, 2026
Patent 12592070
IMAGE PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12586364
SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL
2y 5m to grant Granted Mar 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+19.3%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.
