Prosecution Insights
Last updated: April 19, 2026
Application No. 18/620,728

PROCESSING DEVICE, SYSTEM, PROCESSING METHOD, AND APPARATUS

Non-Final OA — §103, §112
Filed: Mar 28, 2024
Examiner: FATIMA, UROOJ
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Grants 100% — above average
Career Allow Rate: 100% (1 granted / 1 resolved; +38.0% vs TC avg)
Interview Lift: +100.0% (strong lift across resolved cases with interview)
Typical timeline: 2y 9m average prosecution; 16 applications currently pending
Career history: 17 total applications across all art units

Statute-Specific Performance

§101: 24.6% (-15.4% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 1 resolved case.

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2023-060319, filed on 04/03/2023.

Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 03/28/2024 and 09/26/2024 have been considered by the examiner.

Status of Claims
Claims 1-16 are pending in this application.

Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections
Claim 6 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 4 (despite slight difference in wording). Claim 7 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 5 (despite slight difference in wording). Claim 15 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 13 (despite slight difference in wording). Claim 16 is objected to under 37 CFR 1.75 as being a substantial duplicate of claim 14 (despite slight difference in wording).

Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “optical device” in claim 8 “a control device” in claim 8 “a signal processing device” in claim 8 “a display device” in claim 8 “a storage device” in claim 8 “a mechanical device” in claim 8 “a capturing device” in claim 9 Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Claim 8: “optical device” corresponds to figure 9, element 940 “Examples of the optical device 940 include a lens, a shutter, and a mirror.” (Application Pub., Paragraph [0063]). Claim 8: “a control device” corresponds to figure 9, element 950 “The control device 950 is a semiconductor device such as an ASIC.” (Application Pub., Paragraph [0063]). Claim 8: “a signal processing device” corresponds to figure 9, element 960 “The processing device 960 processes signals output from the semiconductor device 930. The processing device 960 is a semiconductor device such as a CPU or an ASIC for constituting an AFE (Analog Front End) or a DFE (Digital Front End).” (Application Pub., Paragraph [0064]). Claim 8: “a display device” corresponds to figure 9, element 970 “The display device 970 is an EL display device or a liquid crystal display device that displays information (image) obtained by the semiconductor device 930.” ).” (Application Pub., Paragraph [0064]). 
Claim 8: “a storage device” corresponds to figure 9, element 980 “The storage device 980 is a magnetic device or a semiconductor device for storing information (image) obtained by the semiconductor device 930. The storage device 980 is a volatile memory such as SRAM or DRAM, or a non-volatile memory such as a flash memory or a hard disk drive.” (Application Pub., Paragraph [0064]). Claim 8: “a mechanical device” corresponds to figure 9, element 990 “The mechanical device 990 includes a movable unit or a driving unit such as a motor and an engine.” (Application Pub., Paragraph [0064]). Claim 9: “a capturing device” corresponds to figure 8, element 31 “The image capturing device 31 includes an event sensor unit 311, a frame sensor unit 312, and a transmission unit 313. The event sensor unit 311 has the functions of the event data obtaining unit 11 and the motion detection unit 131” (Application Pub., Paragraph [0058]).

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites “generate second frame data …corresponding to a time later than a time to which the first frame data corresponds”. It is unclear how the later time is determined: is “a time later than a time” a specific point in time, or an interval relative to an earlier time? Appropriate correction is required. For purposes of examination, the Examiner is interpreting “time later than a time” as an interval relative to an earlier time. Claims 3, 9, 10, and 12 recite a similar limitation and are rejected for the same reasons. Claims 2, 4-8, 11, and 13-16 are rejected for the same reasons due to their dependency.

Positive Statement regarding 35 U.S.C. 101:
Claims 1-16 are determined to be eligible under 35 U.S.C. 101. Claim 1, for example, at lines 8-10 recites “generate second frame data from the first frame data and the motion of the object, the second frame data corresponding to a time later than a time to which the first frame data corresponds.” This limitation is given the weight of the description in the specification at page 5, under which generating the second frame data is not simply computing the frame data using the first frame data and the motion of the object. Page 2, paragraph 24: “the processing unit 13 is capable of detecting motion of the object at an interval shorter than the frame period by using event data, and capable of obtaining motion information at an interval shorter than the frame period. The processing unit 13 can predict a future frame with high accuracy by using motion information obtained by detecting motion of the object at an interval shorter than the frame period, i.e., at a high time resolution. Specifically, the processing unit 13 can generate predictive frame data in which detailed motion of the object between frames is reflected.” Therefore, “generate second frame data from the first frame data and the motion of the object, the second frame data corresponding to a time later than a time to which the first frame data corresponds,” in combination with the other limitations/features in the claim, taken as a whole, makes the claim eligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Burns et al. (US 2018/0098082 A1) (hereinafter, “Burns”) in view of Honda et al. (US 2022/0030185 A1) (hereinafter, “Honda”).
Regarding claim 1, Burns discloses processing device comprising (Paragraph [0013] “the disclosed techniques can be implemented, for example, in a computing system or a graphics processing system”): a first obtaining circuit (Figure 3 element 304, video signal processor, equates to first obtaining unit) configured to obtain first frame data (image frames in paragraph [0026] equates to first frame data) that is frame data of an image of an object at a predetermined interval (Paragraph [0026] “The video signal processor 304 is configured to receive a sequence of image frames generated by a frame-based video camera 104 at a frame sampling rate (or period).”); a second obtaining circuit (Figure 3 element 302, event signal processor, equates to second obtaining unit) configured to obtain event data that is obtainable at an interval shorter than the predetermined interval (high temporal resolution in paragraph [0025] equates to an interval shorter than…) and is a detection result of a change in a pixel value of the object (Paragraph [0025] “The event signal processor 302 is configured to receive a sequence of pixel events, generated asynchronously by an event-based video camera 102. The events, which represent illumination change in a pixel, are captured asynchronously with relatively high temporal resolution.”); and a processor configured to detect motion of the object (image dynamics in paragraph [0017] equates to motion of the object) at an interval shorter than the predetermined interval by using the event data (Paragraph [0017] “the event-based camera may employ a dynamic (or differential) vision sensor (DVS) to record the image dynamics. As such, events associated with changes in any pixel can be captured with a much greater temporal resolution than is possible with a frame-based video camera.”; Paragraph [0028] “event integration circuit 306 is configured to integrate a subset (or all) of the sequence of pixel events, occurring within the frame sampling period between pairs of captured image frames. The integration is employed to generate a pixel motion vector representing motion of the pixel between those frames”.), and generate second frame data (the generated interpolated video frames in paragraph [0028] equates to second video frame) from the first frame data and the motion of the object (Paragraph [0028] “insert a new interpolated frame between two existing frames (e.g., at a 2× up-convert rate), the pixel events may be integrated over half of the frame capture period to generate motion vectors used to predict the new frame at the halfway point.”; Paragraph [0034] “hybrid frame rate up-converter circuit 112 is configured to perform frame rate up-conversion on the sequence of image frames using motion compensated interpolation based, at least in part, on the estimated tile motion vectors. The estimated motion vectors, which are generated from the pixel motion vectors based on the pixel events… motion compensated interpolation circuit 502 is configured to generate interpolated video frames corresponding to time periods between the captured video frames by applying the tile motion vectors to tiles of a captured video frame to predict a new video frame at the next up-converted time period.”). However, Burns fails to teach the second frame data corresponding to a time later than a time to which the first frame data corresponds. 
Honda teaches the second frame data corresponding to a time later than a time to which the first frame data corresponds (Paragraph [0209] “the memory control unit 122 sets the time after the set inter-frame interval from the start time of the frame volume from which the event data is read immediately before from the memory 121 as the head of the second frame volume. Then, the memory control unit 122 reads, from the memory 121, the event data including the time t of the event within the time as the set frame width from the head, as the event data in the frame volume of the second frame”; Paragraph [0214] “the inter-frame interval setting unit 111 sets and supplies the inter-frame interval to the data generation unit 113.”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns’ reference to include the second frame data corresponding to a time later than a time to which the first frame data corresponds taught by Honda’s reference. The motivation for doing so would have been to improve the result’s reliability to obtain frame data that produces an image with high visibility and smooth movement as suggested by Honda (see Honda, Paragraph [0138] and Paragraph [190]). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more that predictable results. Therefore, it would have been obvious to combine Honda with Burns to obtain the invention specified in claim 1. Regarding claim 2, which claim 1 is incorporated, Burns discloses wherein the first obtaining circuit captures the object at the predetermined interval (Paragraph [0026] “The video signal processor 304 is configured to receive a sequence of image frames generated by a frame-based video camera 104 at a frame sampling rate (or period).”), and the second obtaining circuit detects the change in the pixel value (Paragraph [0025] “The event signal processor 302 is configured to receive a sequence of pixel events, generated asynchronously by an event-based video camera 102. The events, which represent illumination change in a pixel, are captured asynchronously with relatively high temporal resolution.”; Paragraph [0040] “Events are associated with a change in pixel illumination, either an increase or a decrease, which exceeds a threshold value.”). Regarding claim 3, which claim 1 is incorporated, Burns discloses wherein the processor generates the second frame data based on the first frame data and the motion of the object detected by using the event data corresponding to the time later than the time to which the first frame data corresponds (Paragraph [0028] “insert a new interpolated frame between two existing frames (e.g., at a 2× up-convert rate), the pixel events may be integrated over half of the frame capture period to generate motion vectors used to predict the new frame at the halfway point.”; Paragraph [0034] “hybrid frame rate up-converter circuit 112 is configured to perform frame rate up-conversion on the sequence of image frames using motion compensated interpolation based, at least in part, on the estimated tile motion vectors. 
The estimated motion vectors, which are generated from the pixel motion vectors based on the pixel events… motion compensated interpolation circuit 502 is configured to generate interpolated video frames corresponding to time periods between the captured video frames by applying the tile motion vectors to tiles of a captured video frame to predict a new video frame at the next up-converted time period.”). However, Burns fails to teach data corresponding to the time later than the time to which the first frame data corresponds. Honda teaches data corresponding to the time later than the time to which the first frame data corresponds (Paragraph [0209] “the memory control unit 122 sets the time after the set inter-frame interval from the start time of the frame volume from which the event data is read immediately before from the memory 121 as the head of the second frame volume. Then, the memory control unit 122 reads, from the memory 121, the event data including the time t of the event within the time as the set frame width from the head, as the event data in the frame volume of the second frame”; Paragraph [0214] “the inter-frame interval setting unit 111 sets and supplies the inter-frame interval to the data generation unit 113.”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns’ reference to include data corresponding to the time later than the time to which the first frame data corresponds taught by Honda’s reference. The motivation for doing so would have been to improve the result’s reliability to obtain frame data that produces an image with high visibility and smooth movement as suggested by Honda (see Honda, Paragraph [0138] and Paragraph [190]). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more that predictable results. Therefore, it would have been obvious to combine Honda with Burns to obtain the invention specified in claim 3. 
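For orientation, the mechanism mapped in claims 1-3 above is generating frame data for a later time from the last captured frame plus motion detected from event data at sub-frame resolution. The minimal sketch below is an editorial illustration of that general technique only, not language or code from Burns, Honda, or the application; it assumes a per-pixel flow field has already been estimated from the event stream, and the function name, array layout, and naive forward-splat warp are hypothetical.

```python
import numpy as np

def predict_future_frame(frame, flow, dt):
    """Illustrative sketch: forward-warp the most recent captured frame
    ("first frame data") by a motion field assumed to come from event data,
    producing a predicted frame at a later time ("second frame data").
    Hole filling and occlusion handling are omitted.

    frame: (H, W) or (H, W, C) array, the last captured frame
    flow:  (H, W, 2) per-pixel displacement, in pixels per frame period
    dt:    how far ahead to predict, as a fraction of the frame period
    """
    h, w = frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # move each source pixel along its (time-scaled) motion vector
    xd = np.clip(np.rint(xs + dt * flow[..., 0]).astype(int), 0, w - 1)
    yd = np.clip(np.rint(ys + dt * flow[..., 1]).astype(int), 0, h - 1)
    predicted = np.zeros_like(frame)
    predicted[yd, xd] = frame[ys, xs]  # naive splat; unfilled pixels stay zero
    return predicted
```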
Regarding claim 8, which claim 1 is incorporated, Burns discloses an apparatus comprising the processing device (Paragraph [0013] “the disclosed techniques can be implemented, for example, in a computing system or a graphics processing system”)…and further comprising at least one of: an optical device corresponding to the processing device (Paragraph [0035] “The images captured by the frame-based camera 104 may not align precisely with the view from the dynamic vision sensor of the event-based camera 102 due to differences in viewing angle or perspective, or differences in characteristics of the lenses or other features of the devices.”); a control device configured to control the processing device (Paragraph [0055] “hardware elements may include processors… ASICs, programmable logic devices, digital signal processors…semiconductor devices,”); a signal processing device configured to process a signal output from the processing device (Paragraph [0024] “hybrid motion estimation circuit 106 is shown to include an event signal processor 302, a video signal processor 304”; Paragraph [0046] “processor (or processor cores) may be any type of processor… for example, a micro-processor… a digital signal processor (DSP)”); a display device configured to display information obtained by the processing device (Note that the claim requires only one of an optical device corresponding to the processing device; a control device configured to control the processing device; a signal processing device configured to process a signal output from the processing device; a display device configured to display information obtained by the processing device; a storage device configured to store information obtained by the processing device; and a mechanical device configured to operate based on information obtained by the processing device.); a storage device configured to store information obtained by the processing device (Paragraph [0045] “platform 810 may comprise any combination of a processor 820, a memory 830…a user interface 860, a display element 890, and a storage system 870.”); and a mechanical device configured to operate based on information obtained by the processing device (Paragraph [0055] “embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements…integrated circuits, ASICs, programmable logic devices, digital signal processors, FPGAs, logic gates, registers, semiconductor devices…”). Figure 8 PNG media_image1.png 766 546 media_image1.png Greyscale Regarding claim 9, Burns discloses a system comprising: a capturing device (Paragraph [0044] “system 800 to perform motion estimation using hybrid video imaging, configured in accordance with certain embodiments of the present disclosure. 
In some embodiments, system 800 comprises a platform 810 which may host, or otherwise be incorporated into a personal computer, workstation, laptop computer, ultra-laptop computer, tablet, touchpad, portable computer, handheld computer, palmtop computer… and so forth”); and a display device (Paragraph [0045] “platform 810 may comprise any combination of a processor 820…an input/output (I/O) system 850, an event-based video camera 102… a display element 890”), wherein the capturing device includes: a first obtaining circuit (Figure 3 element 304, video signal processor, equates to first obtaining unit) configured to obtain first frame data (image frames in paragraph [0026] equate to first frame data) that is frame data of an image of an object by capturing the object at a predetermined interval (Paragraph [0026] “The video signal processor 304 is configured to receive a sequence of image frames generated by a frame-based video camera 104 at a frame sampling rate (or period).”); a second obtaining circuit (Figure 3 element 302, event signal processor, equates to second obtaining unit) configured to obtain event data that is obtainable at an interval shorter than the predetermined interval and is a detection result of a change in a pixel value of the object by detecting the change in the pixel value of the object (Paragraph [0025] “The event signal processor 302 is configured to receive a sequence of pixel events, generated asynchronously by an event-based video camera 102. The events, which represent illumination change in a pixel, are captured asynchronously with relatively high temporal resolution.”); and a first processor configured to detect motion of the object (image dynamics in paragraph [0017] equates to motion of the object) at an interval shorter than the predetermined interval by using the event data (Paragraph [0028] “event integration circuit 306 is configured to integrate a subset (or all) of the sequence of pixel events, occurring within the frame sampling period between pairs of captured image frames. The integration is employed to generate a pixel motion vector representing motion of the pixel between those frames”); and a second processor configured to generate second frame data (generated interpolated video frames in paragraphs [0028] and [0034] equate to second frame data) from the first frame data and the motion information (Paragraph [0028] “insert a new interpolated frame between two existing frames (e.g., at a 2× up-convert rate), the pixel events may be integrated over half of the frame capture period to generate motion vectors used to predict the new frame at the halfway point.”; Paragraph [0034] “hybrid frame rate up-converter circuit 112 is configured to perform frame rate up-conversion on the sequence of image frames using motion compensated interpolation based, at least in part, on the estimated tile motion vectors. 
The estimated motion vectors, which are generated from the pixel motion vectors based on the pixel events… motion compensated interpolation circuit 502 is configured to generate interpolated video frames corresponding to time periods between the captured video frames by applying the tile motion vectors to tiles of a captured video frame to predict a new video frame at the next up-converted time period.”) However, Burns fails to teach a transmission interface configured to transmit [the first frame data and motion information regarding the motion of the object] to outside of the capturing device, and the display device includes: a receiving interface configured to receive [the first frame data and the motion information] from outside of the display device; the second frame data corresponding to a time later than a time to which the first frame data corresponds; and a display configured to display an image based on the second frame data. Honda teaches a transmission interface configured to transmit [the first frame data and motion information regarding the motion of the object] to outside of the capturing device (Paragraph [0290] “The sound/image output section 12052 transmits an output signal of at least one of a sound or an image to an output device capable of visually or auditorily notifying an occupant of the vehicle or the outside of the vehicle of information.”), and the display device includes: a receiving interface configured to receive [the first frame data and the motion information] from outside of the display device (Paragraph [0290] “transmits an output signal of at least one of a sound or an image to an output device capable of visually or auditorily notifying an occupant of the vehicle or the outside of the vehicle of information. In the example of FIG. 23, an audio speaker 12061, a display section 12062… are exemplified as the output device.”); the second frame data corresponding to a time later than a time to which the first frame data corresponds (Paragraph [0209] “the memory control unit 122 sets the time after the set inter-frame interval from the start time of the frame volume from which the event data is read immediately before from the memory 121 as the head of the second frame volume. Then, the memory control unit 122 reads, from the memory 121, the event data including the time t of the event within the time as the set frame width from the head, as the event data in the frame volume of the second frame”; Paragraph [0214] “the inter-frame interval setting unit 111 sets and supplies the inter-frame interval to the data generation unit 113.”); and a display configured to display an image based on the second frame data (Paragraph [0290] “transmits an output signal of at least one of a sound or an image to an output device capable of visually or auditorily notifying an occupant of the vehicle or the outside of the vehicle of information. In the example of FIG. 23, an audio speaker 12061, a display section 12062… are exemplified as the output device.”). 
Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns’ reference to include a transmission interface configured to transmit [the first frame data and motion information regarding the motion of the object] to outside of the capturing device, and the display device includes: a receiving interface configured to receive [the first frame data and the motion information] from outside of the display device; the second frame data corresponding to a time later than a time to which the first frame data corresponds; and a display configured to display an image based on the second frame data taught by Honda’s reference. The motivation for doing so would have been to notify and display the information to the user, as well as to improve the result’s reliability to obtain frame data that produces an image with high visibility and smooth movement as suggested by Honda (see Honda, Paragraph [0090], Paragraph [0138], and Paragraph [190]). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more that predictable results. Therefore, it would have been obvious to combine Honda with Burns to obtain the invention specified in claim 9. Regarding claim 10, Burns discloses a processing method comprising (Paragraph [0013] “the disclosed techniques can be implemented, for example, in a computing system or a graphics processing system”): obtaining first frame data (image frames in paragraph [0026] equate to first frame data) that is frame data of an image of an object at a predetermined interval (Paragraph [0026] “The video signal processor 304 is configured to receive a sequence of image frames generated by a frame-based video camera 104 at a frame sampling rate (or period).”); obtaining event data that is obtainable at an interval shorter than the predetermined interval and is a detection result of a change in a pixel value of the object (Paragraph [0025] “The event signal processor 302 is configured to receive a sequence of pixel events, generated asynchronously by an event-based video camera 102. The events, which represent illumination change in a pixel, are captured asynchronously with relatively high temporal resolution.”); detecting motion of the object (image dynamics in paragraph [0017] equates to motion of the object) at an interval shorter than the predetermined interval by using the event data (Paragraph [0017] “the event-based camera may employ a dynamic (or differential) vision sensor (DVS) to record the image dynamics. As such, events associated with changes in any pixel can be captured with a much greater temporal resolution than is possible with a frame-based video camera.”; Paragraph [0028] “event integration circuit 306 is configured to integrate a subset (or all) of the sequence of pixel events, occurring within the frame sampling period between pairs of captured image frames. 
The integration is employed to generate a pixel motion vector representing motion of the pixel between those frames”); and generating second frame data (generated interpolated video frames in paragraphs [0028] and [0034] equate to second frame data) from the first frame data and the motion of the object (Paragraph [0028] “insert a new interpolated frame between two existing frames (e.g., at a 2× up-convert rate), the pixel events may be integrated over half of the frame capture period to generate motion vectors used to predict the new frame at the halfway point.”; Paragraph [0034] “hybrid frame rate up-converter circuit 112 is configured to perform frame rate up-conversion on the sequence of image frames using motion compensated interpolation based, at least in part, on the estimated tile motion vectors. The estimated motion vectors, which are generated from the pixel motion vectors based on the pixel events… motion compensated interpolation circuit 502 is configured to generate interpolated video frames corresponding to time periods between the captured video frames by applying the tile motion vectors to tiles of a captured video frame to predict a new video frame at the next up-converted time period.”). However, Burns fails to teach the second frame data corresponding to a time later than a time to which the first frame data corresponds. Honda teaches, the second frame data corresponding to a time later than a time to which the first frame data corresponds (Paragraph [0209] “the memory control unit 122 sets the time after the set inter-frame interval from the start time of the frame volume from which the event data is read immediately before from the memory 121 as the head of the second frame volume. Then, the memory control unit 122 reads, from the memory 121, the event data including the time t of the event within the time as the set frame width from the head, as the event data in the frame volume of the second frame”; Paragraph [0214] “the inter-frame interval setting unit 111 sets and supplies the inter-frame interval to the data generation unit 113.”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns’ reference to include the second frame data corresponding to a time later than a time to which the first frame data corresponds taught by Honda’s reference. The motivation for doing so would have been to improve the result’s reliability to obtain frame data that produces an image with high visibility and smooth movement as suggested by Honda (see Honda, Paragraph [0138] and Paragraph [190]). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more that predictable results. Therefore, it would have been obvious to combine Honda with Burns to obtain the invention specified in claim 10. Regarding claim 11 (drawn to a method), claim 11 is rejected the same as claim 2 and the arguments similar to that presented above for claim 2 are equally applicable to the claim 11, and all the other limitations similar to claim 2 are not repeated herein, but incorporated by reference. Regarding claim 12 (drawn to a method), claim 12 is rejected the same as claim 3 and the arguments similar to that presented above for claim 3 are equally applicable to the claim 12, and all the other limitations similar to claim 3 are not repeated herein, but incorporated by reference. 
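The "event data" recited in the claims and mapped to Burns above is a per-pixel report that the pixel value changed by more than a threshold (cf. Burns, Paragraph [0040]). As a rough editorial sketch of that concept only, and assuming two sampled log-intensity images stand in for the asynchronous per-pixel comparison a real DVS performs, event extraction could look like the following; the function name and tuple layout are hypothetical.

```python
import numpy as np

def events_from_intensity_change(prev_log_i, curr_log_i, threshold=0.15):
    """Illustrative sketch: report an event for each pixel whose (log)
    intensity changed by more than a contrast threshold, with polarity
    +1 for an increase and -1 for a decrease.

    Returns an array of (y, x, polarity) rows.
    """
    delta = curr_log_i - prev_log_i
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[ys, xs]).astype(int)
    return np.stack([ys, xs, polarity], axis=1)
```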
Claims 4-7 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Burns et al. (US 2018/0098082 A1) (hereinafter, “Burns”) in view of Honda et al. (US 2022/0030185 A1) (hereinafter, “Honda”) further in view of He et al. ("Timereplayer: Unlocking the potential of event cameras for video interpolation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.) (hereinafter, “He”). Regarding claim 4, which claim 1 is incorporated, Burns discloses [wherein the processor further generates,] in a case where a difference between the first frame data and the second frame data is larger than a threshold (Paragraph [0025] “The event signal processor 302 is configured to receive a sequence of pixel events, generated asynchronously by an event-based video camera 102. The events, which represent illumination change in a pixel, are captured asynchronously with relatively high temporal resolution.”; Paragraph [0040] “Events are associated with a change in pixel illumination, either an increase or a decrease, which exceeds a threshold value.”). However, Burns and Honda fail to teach wherein the processor further generates…third frame data by using third data that has a value between first data and second data, the first data corresponding to at least part of a region of the object in the first frame data, and the second data corresponding to at least part of a region of the object in the second frame data. He teaches wherein the processor further generates…third frame data (target frame on Page 17807 right column last paragraph equates to third frame data) by using third data (average of two warped frames on Page 17807 right column last paragraph equates to third date) that has a value between first data and second data (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted 17807 average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”), the first data corresponding to at least part of a region of the object in the first frame data (input frame at t0 on Page 17808 left column first paragraph equates to first frame) (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted 17807 average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”), and the second data corresponding to at least part of a region of the object in the second frame data (input frame at t1 on Page 17808 left column first paragraph equates to first frame (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. 
The blending process is taken as a weighted average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns in view of Honda to include wherein the processor further generates…third frame data by using third data that has a value between first data and second data, the first data corresponding to at least part of a region of the object in the first frame data, and the second data corresponding to at least part of a region of the object in the second frame data, as taught by He’s reference. The motivation for doing so would have been to address complex motion and reconstruct high-quality intermediate frames as suggested by He (see He, Page 17811, Section 5 Conclusion, right column). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine He with Burns and Honda to obtain the invention specified in claim 4.

Regarding claim 5, which claim 4 is incorporated, Burns and Honda fail to teach wherein the value of the third data is an average value between a value of the first data and a value of the second data. He teaches wherein the value of the third data (the average of two warped frames on Page 17807, right column, last paragraph equates to the third data) is an average value between a value of the first data and a value of the second data (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns in view of Honda to include wherein the value of the third data is an average value between a value of the first data and a value of the second data, as taught by He’s reference. The motivation for doing so would have been to address complex motion and reconstruct high-quality intermediate frames as suggested by He (see He, Page 17811, Section 5 Conclusion, right column). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine He with Burns and Honda to obtain the invention specified in claim 5.
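The He passage quoted above describes synthesizing the target frame as a weighted blend of two warped input frames, with weights built from the time interval and visibility maps. The sketch below is an editorial illustration of that general style of blend (as used in event-assisted interpolation work), not code from He; the function name, the assumption that all arrays share one shape, and the exact normalization are assumptions.

```python
import numpy as np

def blend_warped_frames(warp0, warp1, vis0, vis1, t):
    """Illustrative weighted blend of two frames warped to time t in [0, 1],
    with weights formed from the time interval and per-pixel visibility maps.
    All five arrays are assumed to share the same shape; the precise
    weighting in He may differ in detail.

    warp0, warp1: I0 and I1 warped toward time t along refined optical flows
    vis0, vis1:   per-pixel visibility maps in [0, 1]
    """
    w0 = (1.0 - t) * vis0
    w1 = t * vis1
    denom = np.maximum(w0 + w1, 1e-6)  # guard against divide-by-zero
    return (w0 * warp0 + w1 * warp1) / denom
```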
Regarding claim 6, which claim 1 is incorporated, Burns teaches [wherein the processor further generates], in a case where a difference between the first frame data and the second frame data is larger than a threshold (Paragraph [0025] “The event signal processor 302 is configured to receive a sequence of pixel events, generated asynchronously by an event-based video camera 102. The events, which represent illumination change in a pixel, are captured asynchronously with relatively high temporal resolution.”; Paragraph [0040] “Events are associated with a change in pixel illumination, either an increase or a decrease, which exceeds a threshold value.”). However, Burns and Honda fail to teach wherein the processor further generates…third frame data by using third data that has a value between first data and second data, the first data corresponding to a part of a region of the object in the first frame data, and the second data corresponding to the part of a region of the object in the second frame data. He teaches wherein the processor further generates…third frame data by using third data that has a value between first data and second data (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted 17807 average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”), the first data corresponding to a part of a region of the object in the first frame data (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted 17807 average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”), and the second data corresponding to the part of a region of the object in the second frame data (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted 17807 average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns in view of Honda to include wherein the processor further generates…third frame data by using third data that has a value between first data and second data, the first data corresponding to a part of a region of the object in the first frame data, and the second data corresponding to the part of a region of the object in the second frame data taught by He’s reference. 
The motivation for doing so would have been to address complex motion and reconstruct high-quality intermediate frames as suggested by He (see He, Page 17811, Section 5 Conclusion right column). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more that predictable results. Therefore, it would have been obvious to combine He with Burns and Honda to obtain the invention specified in claim 6. Regarding claim 7, which claim 6 is incorporated, Burns and Honda fail to teach wherein the value of the third data is an average value between a value of the first data and a value of the second data. He teaches wherein the value of the third data is an average value between a value of the first data and a value of the second data (Page 17807 right column last paragraph continued on to Page 17808 left column first paragraph “The target frame Ît can be synthesized by blending the warped input frames using refined optical flows. The blending process is taken as a weighted 17807 average of two warped frames with the product of time interval and visibility map as weights… as long as two input frames at two time stamps t0 and t1, and the event streams between these two time stamps and the targeted one t are given, we could synthesize the desired frame at that time stamp”). Therefore, it would have been obvious to one of ordinary skill of the art before the effective filing date to modify Burns in view of Honda to include wherein the value of the third data is an average value between a value of the first data and a value of the second data taught by He’s reference. The motivation for doing so would have been to address complex motion and reconstruct high-quality intermediate frames as suggested by He (see He, Page 17811, Section 5 Conclusion right column). Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more that predictable results. Therefore, it would have been obvious to combine He with Burns and Honda to obtain the invention specified in claim 7. Regarding claim 13 (drawn to a method), claim 13 is rejected the same as claim 4 and the arguments similar to that presented above for claim 4 are equally applicable to the claim 13, and all the other limitations similar to claim 4 are not repeated herein, but incorporated by reference. Regarding claim 14 (drawn to a method), claim 14 is rejected the same as claim 5 and the arguments similar to that presented above for claim 5 are equally applicable to the claim 14, and all the other limitations similar to claim 5 are not repeated herein, but incorporated by reference. Regarding claim 15 (drawn to a method), claim 15 is rejected the same as claim 6 and the arguments similar to that presented above for claim 6 are equally applicable to the claim 15, and all the other limitations similar to claim 6 are not repeated herein, but incorporated by reference. Regarding claim 16 (drawn to a method), claim 16 is rejected the same as claim 7 and the arguments similar to that presented above for claim 7 are equally applicable to the claim 16, and all the other limitations similar to claim 7 are not repeated herein, but incorporated by reference. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Imagawa et al. 
(WO 2006/137253 A1) discloses a device that generates a new motion image based on the input of a first motion image and a second motion image. Tseng et al. (US 2023/0351552 A1) discloses an apparatus that receives a plurality of consecutive images and generates intermediate images that are then used to generate an output image with reduced blurriness. Suzuki et al. (JP 5,980,618 B2) discloses an apparatus that acquires frame images with a motion between the frames to generate a predicted frame image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA, whose telephone number is (571) 272-2096. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UROOJ FATIMA/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Mar 28, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 1 resolved case by this examiner. Grant probability is derived from the examiner's career allow rate.
