Prosecution Insights
Last updated: April 19, 2026
Application No. 18/692,031

MOTION MAGNIFICATION DEVICES AND METHODS OF USING THEREOF

Non-Final OA: §101, §102, §103, §112
Filed: Mar 14, 2024
Examiner: BROUGHTON, KATHLEEN M
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Postech Research And Business Development Foundation
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 83% (219 granted / 263 resolved; +21.3% vs TC avg; above average)
Interview Lift: +8.3% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 7m typical timeline; 34 currently pending
Total Applications: 297 across all art units
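The headline figures above are simple ratios, and they can be cross-checked in a few lines. A minimal sketch, assuming the displayed allow rate is granted/resolved and the with-interview figure is the base rate plus the quoted interview lift (both assumptions, since the page does not state its formulas):

```python
# Sanity-check the examiner stat cards (assumes allow rate = granted / resolved,
# and with-interview rate = base rate + quoted interview lift).
granted, resolved = 219, 263

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # ~83.3%, displayed as 83%

interview_lift = 8.3
print(f"With interview: {allow_rate + interview_lift:.0f}%")  # ~92%
```

Both derived values round to the figures shown on the card, which suggests the dashboard is rounding a 219/263 career ratio.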

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Tech Center averages are estimates; based on career data from 263 resolved cases.
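The per-statute deltas let you back out the implied Tech Center baseline. A quick check, assuming each "vs TC avg" delta is simply (examiner rate minus TC average), which the page implies but does not state:

```python
# Recover the implied TC average from each statute card above,
# assuming delta = examiner rate - TC average.
cards = {
    "§101": (10.9, -29.1),
    "§102": (24.1, -15.9),
    "§103": (51.2, +11.2),
    "§112": (11.4, -28.6),
}
for statute, (rate, delta) in cards.items():
    print(statute, "implied TC avg:", round(rate - delta, 1))
```

Notably, all four statutes imply the same 40.0% baseline, so the dashboard appears to compare against a single Tech Center estimate rather than per-statute averages.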

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

A Preliminary Amendment was made 03/14/2024 to amend the specification and the abstract; claims 1-11 are pending, including amendments to claims 1, 2, 6, 7, 10, and 11; claim 12 is cancelled.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 03/14/2024 and 06/11/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations and associated structure and function are, for Claim 1:

"encoder": Figure 2, element 231, described in prose on pg 11 ln 2-21.
"first module": Figure 2, element 233, described in prose on pg 11 ln 22 to pg 12 ln 17.
"second module": Figure 2, element 235, described in prose on pg 12 ln 18 to pg 14 ln 7.
"third module": Figure 2, element 237, described in prose on pg 14 ln 8-23.

The methodology of using the encoder and modules is further shown in Figure 4 and described in prose on pg 14 ln 24 to pg 17 ln 1. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim currently recites "A recording medium having a program performing the method of claim 6, which is stored therein," and the full scope of "recording medium having a program performing the method" includes transitory signals. In this case, while the specification exemplifies various forms of a medium, it does not disavow transitory signals ("a recording medium to perform the motion amplification method" pg 5 ln 3-8, specifically ln 7-8). The specification is silent regarding whether the medium can "propagate, or transport the program." Because the specification does not explicitly disavow signals, "a recording medium" is therefore interpreted to include signals by one of ordinary skill in the art. The claim language does not clearly state that signals are excluded, and the broadest reasonable interpretation is therefore to include transitory signals, which is not patent eligible subject matter. The state-of-the-art at the time the invention was made included signals, carrier waves, and other wireless communication modalities (e.g., RF, infrared, etc.) as media on which executable code was recorded and from which computers acquired such code. Thus, the full scope of the claim covers "signals" and their equivalents, which are non-statutory per se (see In re Nuijten).
The examiner suggests clarifying the claim to exclude such non-statutory signal embodiments, such as (but not limited to) reciting a "non-transitory computer recordable medium", or equivalent, consistent with the corresponding original disclosure.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 6-9 and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Oh et al (Learning-based Video Motion Magnification).
Regarding Claim 6, Oh et al teach a motion amplification method comprising:

receiving a first frame and a second frame arbitrarily adjacent in order within an image for an object (Input Frame X_a and Input Frame X_b; Fig 2 and 3.2 Deep CNN ¶ 1-3), and decomposing the first frame into first shape information and first texture information (Input Frame X_a is analyzed for shape and texture data; Fig 2 and 3.2 Deep CNN ¶ 3-5), and decomposing the second frame into second shape information and second texture information (Input Frame X_b is analyzed for shape and texture data; Fig 2 and 3.2 Deep CNN ¶ 3-5);

generating a third frame in which a motion of the object is amplified based on the first shape information, the second shape information, and the second texture information (the manipulator considers the encoder shape and texture data of Input Frame X_a and Input Frame X_b plus magnification factor α, with a non-linear manipulator to magnify the motion with reduced noise, and a frame demonstrating the motion is output as a temporally filtered magnified frame; Fig 2, 3 and 3.2 Deep CNN ¶ 3-5, Temporal operation);

analyzing an intensity of the motion based on the first shape information, the second shape information, and the first texture information (the amplified motion data, based on the texture and shape of the object in the input frames, is analyzed to determine the amount of motion using the manipulator; Fig 2, 3 and 3.2 Deep CNN ¶ 3-5, Temporal operation); and

generating amplification image data indicating the intensity of the motion on the third frame (the encoder data of Input Frame X_a and Input Frame X_b plus magnification factor α, and encoded data of Input Frame X_b, are input to the decoder to generate Magnified Frame Ŷ, representing video motion magnification; Fig 2 and 3.2 Deep CNN ¶ 3-5, Temporal operation).
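The anticipation mapping above follows Oh et al's decompose-manipulate-recompose pattern. The NumPy sketch below illustrates only that pattern, not Oh et al's actual learned network: the box-blur "shape"/residual "texture" split, the identity decoder, and the per-pixel intensity measure are all illustrative stand-ins for learned components.

```python
import numpy as np

def box_blur(img, r=1):
    """Local mean via shifted copies; stands in for a learned 'shape' encoder."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * r + 1) ** 2

def decompose(frame):
    shape = box_blur(frame)   # low-frequency structure ("shape information")
    texture = frame - shape   # residual detail ("texture information")
    return shape, texture

def magnify(frame_a, frame_b, alpha):
    shape_a, _ = decompose(frame_a)
    shape_b, texture_b = decompose(frame_b)
    # Manipulator: amplify the inter-frame shape difference by alpha.
    shape_mag = shape_b + alpha * (shape_b - shape_a)
    # Identity "decoder": recombine amplified shape with frame-b texture.
    magnified = texture_b + shape_mag
    # Per-pixel motion intensity from the shape difference.
    intensity = np.abs(shape_b - shape_a)
    return magnified, intensity
```

With `alpha = 0` the round trip reproduces the second frame exactly, which makes the decompose/recompose contract easy to verify even though every component here is a toy.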
Regarding Claim 7, Oh et al teach the motion amplification method of claim 6 (as described above), wherein the generating of the third frame includes multiplying a difference between the first shape information and the second shape information by a predetermined amplification coefficient to generate new shape information (magnification factor α is applied to multiply the difference of the motion of the object, which considers the shape and texture of the object between the first and second input frames; Fig 2, 3 and 3.2 Deep CNN ¶ 3-5, Temporal operation), and synthesizing the generated shape information and the second texture information to generate the third frame (the (third) frame representing the amplified motion difference in the input frames accounts for the shape and texture representation; Fig 2, 3 and 3.2 Deep CNN ¶ 3-5, Temporal operation).

Regarding Claim 8, Oh et al teach the motion amplification method of claim 7 (as described above), wherein the analyzing of the intensity of the motion includes calculating each pixel change between the first frame and the second frame based on the first shape information and the second shape information (changes in motion are analyzed at the pixel level, and the subpixel motion is determined to assess pixel motion; Fig 2, 3 and 3.2 Temporal operation, 3.3 Subpixel motion generation), and analyzing the intensity of the motion of the object based on each calculated pixel change (a pixel-wise analysis is performed to determine the temporal changes across the two input images; Fig 2, 3 and 3.2 Temporal operation).
Regarding Claim 9, Oh et al teach the motion amplification method of claim 8 (as described above), wherein the analyzing of the intensity of the motion further includes analyzing the intensity of the motion of the object by using a convolutional neural network (CNN) trained to analyze the intensity of the motion from input shape information of arbitrary frames (a deep convolutional neural network is applied to reconstruct a magnified frame and determine the object motion, with the CNN trained based on object shape and texture of various objects; Fig 2 and 3.2 Deep CNN Architecture, 3.3 Synthetic Training Dataset).

Regarding Claim 11, Oh et al teach a recording medium having a program (network architecture stored in a memory and used to perform the motion magnification and analysis; Fig 2 and 3.2 Deep CNN Architecture ¶ 1-2) performing the method of claim 6 (as described above), which is stored therein.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Oh et al (Learning-based Video Motion Magnification) in view of Ordonez et al (Detection of human vital signs in hazardous environments by means of video magnification).
Regarding Claim 10, Oh et al teach the motion amplification method of claim 9 (as described above), wherein the generating of the third frame includes determining an area in which the motion exceeds a vibration threshold (a magnification factor α is limited up to 100 and the sample of the input motion is up to 10 pixels, so magnified motion does not exceed 30 pixels; 3.3 Input motion and amplification factor).

Oh et al do not teach that the detected motion is a dangerous area which needs to be checked when the intensity of the motion exceeds the vibration threshold prestored for the object, and generating the amplification image data to indicate the dangerous area on the amplification image data.

Ordonez et al is analogous art pertinent to the technological problem addressed in the current application and teaches that the detected motion is a dangerous area which needs to be checked when the intensity of the motion exceeds the vibration threshold prestored for the object (a magnification factor α is limited to perceive different movements in the video and uses an upper limit octave bandwidth for analysis in determining movement of people, and the system can be applied to accidents involving people; Fig 1, 4, 5 and Methodology – Temporal filtering and magnification, Simulating a scenario with injured people), and generating the amplification image data to indicate the dangerous area on the amplification image data (the image data is magnified to a degree required to determine movement, with the smallest movement indicating breathing; Fig 1, 4, 5 and Methodology – Temporal filtering and magnification, Simulating a scenario with injured people).
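The dangerous-area limitation at issue here is, at its core, a threshold test on motion intensity plus an annotation of the output image. A minimal sketch of that idea, with the function name, mask-plus-overlay representation, and highlighting scheme all assumptions for illustration rather than the method of either reference:

```python
import numpy as np

def flag_dangerous_area(intensity, vibration_threshold, frame):
    """Mark pixels whose motion intensity exceeds a prestored threshold.

    Illustrative only: the 'dangerous area' indication is modeled as a
    boolean mask plus a copy of the frame with flagged pixels pushed to
    the frame's maximum value so the area is visible in the output.
    """
    danger_mask = intensity > vibration_threshold
    overlay = frame.astype(float).copy()
    overlay[danger_mask] = overlay.max() if overlay.size else 0.0
    return danger_mask, overlay
```

The threshold itself would be the per-object prestored value the claim recites; here it is just a scalar argument.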
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to combine the teachings of Oh et al with Ordonez et al, including that the detected motion is a dangerous area which needs to be checked when the intensity of the motion exceeds the vibration threshold prestored for the object, and generating the amplification image data to indicate the dangerous area on the amplification image data. By changing the magnification value, the perceptible different movements in the input video may be determined, thereby improving the ability to perceive injured people using image processing techniques and thereby reducing risks for responders in a cost-effective approach, as recognized by Ordonez et al (Introduction).

Allowable Subject Matter

Claims 1-5 are allowed. Claim 1 is interpreted under 35 U.S.C. § 112(f), as discussed above. The claim is considered novel over the prior art for not teaching the entirety of the interpreted claim limitations. Claim 1 recites:

A motion amplification device comprising: an encoder configured to receive a first frame and a second frame arbitrarily adjacent in order within an image for an object, and decompose the first frame into first shape information and first texture information and decompose the second frame into second shape information and second texture information; a first module configured to generate a third frame in which a motion of the object is amplified based on the first shape information, the second shape information, and the second texture information; a second module configured to analyze an intensity of the motion based on the first shape information, the second shape information, and the first texture information; and a third module configured to generate amplification image data indicating the intensity of the motion on the third frame.

Claims 2-5 are dependent on claim 1 and therefore allowed for similar reasons.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dorkenwald et al (Unsupervised Magnification of Posture Deviations Across Subjects) teach motion magnification techniques that analyze posture differences over time without the use of keypoint annotation or visualized deviations to determine the motion magnification. Zhou et al (US 2017/0083748) teach a system and method for detecting and tracking dynamic objects, including motion magnification techniques. Berlin et al (US 2020/0134791) teach a system and method for determining spatio-temporal differences for high dynamic range imaging, including analysis of texture and shape in analyzing motion over time.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON, whose telephone number is (571) 270-7380. The examiner can normally be reached Monday-Friday 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN M BROUGHTON/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Mar 14, 2024: Application Filed
Jan 10, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915: FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS (2y 5m to grant; granted Apr 14, 2026)
Patent 12597233: SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL (2y 5m to grant; granted Apr 07, 2026)
Patent 12586203: IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12567227: METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION (2y 5m to grant; granted Mar 03, 2026)
Patent 12565240: METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 92% (+8.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 263 resolved cases by this examiner; grant probability derived from career allow rate.
