Prosecution Insights
Last updated: April 19, 2026
Application No. 17/353,425

SPECTRUM ALGORITHM WITH TRAIL RENDERER

Status: Final Rejection (§103)
Filed: Jun 21, 2021
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Lemon Inc.
OA Round: 10 (Final)

Grant Probability: 68% (Favorable)
Estimated OA Rounds: 11-12
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68%, above average (711 granted / 1052 resolved; +5.6% vs TC avg)
Interview Lift: +29.9%, strong (resolved cases with an interview vs. without)
Typical Timeline: 2y 10m avg prosecution; 110 applications currently pending
Career History: 1162 total applications across all art units
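These figures appear internally consistent if the interview lift is read as a simple difference in allowance rates among resolved cases (the dashboard does not state its formula, so this reading is an assumption):

  base allow rate:          711 / 1052 ≈ 67.6% (reported as 68%)
  with examiner interview:  68% + 29.9% lift ≈ 98%

which matches the 98% "With Interview" grant probability shown in the header.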

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
TC avg = Tech Center average estimate. Based on career data from 1052 resolved cases.
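Assuming each delta is a simple difference from the Tech Center baseline, the baseline backs out to the same value for all four statutes, which suggests the estimate uses a single TC-wide figure:

  §101: 2.2% + 37.8% = 40.0%
  §103: 43.9% - 3.9% = 40.0%
  §102: 27.0% + 13.0% = 40.0%
  §112: 20.7% + 19.3% = 40.0%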

Office Action

§103
DETAILED ACTION

Response to Amendment

This is in response to applicant’s amendment/response filed on 12/15/2025, which has been entered and made of record. Claims 1, 4, 9, 12, and 15 have been amended. Claims 2, 5, 10, and 16 have been cancelled. Claims 1, 3-4, 6-9, 11-15, and 17-23 are pending in the application. As an initial matter, the rejections under 35 U.S.C. 112 for claims 1, 3-4, 6-9, 11-15, and 17-23 have been withdrawn in view of applicant's amendments.

Response to Arguments

Applicant's arguments filed on 12/15/2025 regarding the claim rejections under 35 U.S.C. 103 have been fully considered but they are not persuasive. Applicant submits: “Applicant respectfully submits that the applied references, whether taken individually or in combination, fail to disclose, teach or suggest at least some limitations of claim 1. The Office acknowledges, on pages 18 and 19 of the Office Action, that Wehner does not disclose "applying the average frequencies for different frequency groups to update the one or more audio visualizations of the one or more graphics or effects to be applied to the trail particles." As asserted by the Office, Wehner merely discloses audio visualization, i.e., by converting music/sound properties to visual effects. Adiletta is cited for allegedly disclosing the above features. However, Adiletta does not remedy the deficiencies of Wehner. Adiletta discloses, at the Abstract, "a visual performance using particles by varying each particle's position, velocity, and color based on parameters extracted via digital processing of input audio." As such, Adiletta discloses, at most, visualization of input audio by particles, which is entirely different from "update the one or more audio visualizations to control one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters," as recited in claim 1.” (Remarks, Page 11)

The examiner disagrees with Applicant’s premises and conclusion. Wehner teaches, in ¶0077, “Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” Wehner further teaches, in ¶0078, “track the movement of the physical object and derive position/size values based on the tracking, is also configured to manipulate the XR visual (i.e., control visual features of the XR visual), generated by the XR controller and overlaid on the physical object in the scene”. This example can teach “updating the one or more video visualization parameters to control a behavior of the trail particles” and “update the one or more audio visualizations to control one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters.”
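The mechanism the examiner reads Wehner ¶0077 to teach, converting musical/sound attributes into normalized parameters that drive visual features, is a conventional mapping. A minimal sketch in Python follows; every name, field, and numeric range here is a hypothetical illustration, not taken from Wehner or the prosecution record.

from dataclasses import dataclass

@dataclass
class AudioAttributes:
    """Attributes a DAW might report back to the bridge (assumed fields)."""
    tempo_bpm: float       # e.g., 60-200 BPM
    decibel_level: float   # e.g., -60 to 0 dBFS

def normalize(value: float, lo: float, hi: float) -> float:
    """Clamp and rescale a raw attribute into [0, 1] for visual control."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def to_visual_parameters(attrs: AudioAttributes) -> dict:
    """Convert sound attributes into visually renderable control values,
    in the spirit of the 'bridge' described in Wehner ¶0077."""
    return {
        "orb_scale": normalize(attrs.tempo_bpm, 60.0, 200.0),         # expand/contract with tempo
        "orb_brightness": normalize(attrs.decibel_level, -60.0, 0.0), # glow with loudness
    }

print(to_visual_parameters(AudioAttributes(tempo_bpm=128.0, decibel_level=-12.0)))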
Adiletta, at page 4, teaches “calculated a running average of the audio frequency bins” and “The visualization shows four graphs: current-frequency-values, averaged-frequency-values, current-volatility-values, averaged-volatility-values.” Pages 5 and 6 teach “12 particle groups” and the particle simulation. Adiletta thus teaches the term “applying the average frequencies for different frequency groups to update the one or more audio visualizations to control one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-9, 11-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wehner et al. (US Pub 2019/0005733 A1) in view of Adiletta, Matthew Joseph, and Oliver Thomas, "An artistic visualization of music modeling a synesthetic experience," arXiv preprint arXiv:2012.08034 (2020), further in view of Kim et al. (US 2020/0351450 A1).

As to claim 1, Wehner discloses a method for rendering motion-audio visualizations to a display (Wehner, ¶0077, “The DAW could communicate back to the bridge musical/sound attributes such as tempo, decibel level, time signature, MIDI messages generated by the DAW, the sound generated, and more. Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted).
As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.”), the method comprising: obtaining video data comprising one or more video frames (¶0060, “The physical objects can be tracked utilizing a camera 1.2 associated with tracking logic implemented, e.g., in a controller.”); obtaining a visualization template from a visualization template database to be applied to each of the one or more video frames, wherein the visualization template is configured to define one or more video visualization parameters and one or more audio visualization parameters (Wehner, ¶0077-0078 teaches “Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” This teaches “define one or more video visualization parameters and one or more audio visualization parameters” The orb or a glowing sun could be a visualization template. ¶0119, “X, Y, and Z parameters could be determined relative to either a user constructed virtual 3D space such as a cube or cuboid, or pre-generated 3D spaces that are tied to different “scenes” of a visual representation.” ¶0131, “The physical object could be transformed into a glowing, fiery orb and could expand and contract based on the tempo of a song (e.g., as the tempo increases and decreases), the overall size of the orb could increase or decrease based on decibel level or loudness units relative to full scale (LUFs—an audio term for volume level) (e.g., as the loudness increases and decreases), eruptions out of the orb could be based on beat drops, luminescence and color of the orb could be based on the physical objects rotation and position (e.g., rotate left or right transforms color from blue to red, and vice versa, while movement left or right increases and decreases luminescence)” ¶0147, “The XR visualization may include an animated object (e.g., a fiery orb) that overlays a tracked position of the object in the scene. The visual features may include 3D position, multi-dimensional rotation, brightness, color, shading, size (e.g., diameter), and rotation of the animated object as displayed. The brightness, color, shading, size, and rotation of the animated object may each change between different values/characteristics thereof responsive to a change in a corresponding one of the movement parameters, for example. Changes to the XR visualization may include addition of further XR visualization on top of or beside the original XR visualization. 
Also, 3D positional movement of the physical object may not change a corresponding 3D position of the physical object or overlay in the XR visualization/scene as displayed, but rather may change a non-3D positional visual feature such as size (from a smaller size to a larger size or vice versa), color (e.g., from blue to green or vice versa), or shape (e.g., from an orb to a star). That is, movement of the physical object may result a change between values/characteristics of a non-movement visual feature.”); determining a position of a target object in each of the one or more video frames (¶0115, “FIG. 10 shows an embodiment of how the physical object can be tracked along the Z-axis within a 2D frame generated by a directional sensor such as a camera.” ¶0146, “a video camera of XR controller 1.5 captures video/images of 3D moveable object 1.1 in a scene, and the XR controller tracks movement of the object in in the scene (e.g., including 3D position and multi-dimensional rotation of the object, such as X, Y, and Z position and rotation about X, Y, and Z axes) based on the captured video/images, to produce multiple movement parameters or “movement signals” (e.g., 3D position and rotation parameters or movement signals) that define the movement in 3D space.”), wherein the target object is configured by the visualization template (¶0131, “The physical object could be transformed into a glowing, fiery orb and could expand and contract based on the tempo of a song (e.g., as the tempo increases and decreases), the overall size of the orb could increase or decrease based on decibel level or loudness units relative to full scale (LUFs—an audio term for volume level) (e.g., as the loudness increases and decreases), eruptions out of the orb could be based on beat drops, luminescence and color of the orb could be based on the physical objects rotation and position (e.g., rotate left or right transforms color from blue to red, and vice versa, while movement left or right increases and decreases luminescence)” ¶0147, “The XR visualization may include an animated object (e.g., a fiery orb) that overlays a tracked position of the object in the scene. The visual features may include 3D position, multi-dimensional rotation, brightness, color, shading, size (e.g., diameter), and rotation of the animated object as displayed. The brightness, color, shading, size, and rotation of the animated object may each change between different values/characteristics thereof responsive to a change in a corresponding one of the movement parameters, for example. Changes to the XR visualization may include addition of further XR visualization on top of or beside the original XR visualization. Also, 3D positional movement of the physical object may not change a corresponding 3D position of the physical object or overlay in the XR visualization/scene as displayed, but rather may change a non-3D positional visual feature such as size (from a smaller size to a larger size or vice versa), color (e.g., from blue to green or vice versa), or shape (e.g., from an orb to a star). 
That is, movement of the physical object may result a change between values/characteristics of a non-movement visual feature.”); determining, based on the position of the target object in each of the one or more video frames, one or more positions of particle emitters in the corresponding video frame that are distinct from the position of the target object (Wehner, Fig .15, ¶0077, “Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” ¶0078, “This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect. It may also be generalized to using other message protocols, such as OSC, or to feed audio data into the XR controller to visualize it. Fig. 19, ¶0131, “FIG. 19 provides an example of how the physical object can be visualized in a music context. The physical object could be transformed into a glowing, fiery orb and could expand and contract based on the tempo of a song (e.g., as the tempo increases and decreases), the overall size of the orb could increase or decrease based on decibel level or loudness units relative to full scale (LUFs—an audio term for volume level) (e.g., as the loudness increases and decreases), eruptions out of the orb could be based on beat drops, luminescence and color of the orb could be based on the physical objects rotation and position (e.g., rotate left or right transforms color from blue to red, and vice versa, while movement left or right increases and decreases luminescence), and so on.” ¶0097, “the visual and the sun have a venue relative position above the front of the audience, and would appear to be above that person.” ¶0144, “levitating above the audience and its movements are directly manipulated by the producer in real time relative to the movements of the physical object in the producer's hand or based on the input of a sensor or controller 2.8 such as a sensored glove.” Fig. 27 and ¶0138 teaches another example with “a trail of light particles could be left behind as the physical object moves through space, like the trail of light left by a sparkler. 
The light particles could also vibrate and react based on feedback parameters coming from the bridge or 3rd Party Software.”), wherein the particle emitters control a source of trail particles to be emitted from the target object with the one or more video visualization parameters to indicate a real-time position of the target object in the video data (Wehner, ¶0035, “where holographic visualizations, such as a glowing sun, could be viewed overhead and their movement manipulated in real-time by the DJ and the music that is being generated.” ¶0056, “XR provides an opportunity for a new type of input device which can utilize movement and position tracking of “real world” physical objects in 3D space as input, and holographic visualizations layered on top of those “real world” physical objects which are manipulated and visualized in real-time to provide feedback to a user.” ¶0077, “Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” ¶0078, “This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect. It may also be generalized to using other message protocols, such as OSC, or to feed audio data into the XR controller to visualize it. Fig. 19, ¶0131, “FIG. 19 provides an example of how the physical object can be visualized in a music context. The physical object could be transformed into a glowing, fiery orb and could expand and contract based on the tempo of a song (e.g., as the tempo increases and decreases), the overall size of the orb could increase or decrease based on decibel level or loudness units relative to full scale (LUFs—an audio term for volume level) (e.g., as the loudness increases and decreases), eruptions out of the orb could be based on beat drops, luminescence and color of the orb could be based on the physical objects rotation and position (e.g., rotate left or right transforms color from blue to red, and vice versa, while movement left or right increases and decreases luminescence), and so on.” Fig. 27 and ¶0138, “a trail of light particles could be left behind as the physical object moves through space, like the trail of light left by a sparkler. The light particles could also vibrate and react based on feedback parameters coming from the bridge or 3rd Party Software.”. “the trail of light left by a sparkler” could also be a particle emitter and it indicate a real-time position of the target object in the video data.); obtaining audio data (¶0163m “audio interfaces”. 
¶0170, “a response back from the DAW, such as a MIDI control messages that represents beats per minute of the audio, or the direct audio feed that is output by the Digital Audio Workstation.” ¶0173, “If the bridge receives a raw audio stream, the bridge can process the raw audio stream, including performing a frequency analysis on the audio stream, to determine properties of the music such as decibel level, beat drops, and tonal quality, and then (i) normalize and deliver those properties to the XR Controller, and (2) use the normalized properties to either trigger specific effects or to progressively manipulate an effect over time based on a range of numerical values that will be send multiple times per second based on the auditory properties at each timestamp. 3516 represents the viewing mechanism that was discussed in 3510.”); determining a frequency spectrum from the audio data for a predetermined time period to obtain frequency spectrum data (¶0082, “a DAW sending a frequency spectrum of an audio track to the bridge, which relays the frequency spectrum to the internal viewing mechanism inside the networked device (or the XR controller display) or to a DMX-based lighting system.” ¶0173, “performing a frequency analysis on the audio stream, to determine properties of the music such as decibel level, beat drops, and tonal quality, and then (i) normalize and deliver those properties to the XR Controller”); updating the one or more video visualization parameters and the one or more audio visualization parameters of the visualization template, based at least in part on the frequency spectrum data of the audio data and the real-time position of the target object in the video data, by updating the one or more positions of the particle emitters to emit the trail particles, updating the one or more video visualization parameters to control a behavior of the trail particles (Wehner, ¶0077, “Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted).
As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” ¶0078, “used to track the movement of the physical object and derive position/size values based on the tracking, is also configured to manipulate the XR visual (i.e., control visual features of the XR visual), generated by the XR controller and overlaid on the physical object in the scene, responsive to this resize message.“ ¶0082, “a DAW sending a frequency spectrum of an audio track to the bridge, which relays the frequency spectrum to the internal viewing mechanism inside the networked device (or the XR controller display) or to a DMX-based lighting system.” ¶0173, “performing a frequency analysis on the audio stream, to determine properties of the music such as decibel level, beat drops, and tonal quality, and then (i) normalize and deliver those properties to the XR Controller, and (2) use the normalized properties to either trigger specific effects or to progressively manipulate an effect over time” Fig. 27 and ¶0138, “a trail of light particles could be left behind as the physical object moves through space, like the trail of light left by a sparkler. The light particles could also vibrate and react based on feedback parameters coming from the bridge or 3rd Party Software.”.); determining audio visualizations based at least in part on the updated one or more audio visualization parameters, the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the frequency spectrum (¶0086, “the user may decide that a vertical movement of the physical object along the Y axis controls the VCF filter cutoff frequency of a synthesizer connected via MIDI as well as the brightness of a spotlight connected via DMX as well as brightness of a XR visual such as a glowing sun layered on top of the physical object.” ¶0151, “a DAW, messages representative of sound attributes of sound generated by the external entity, e.g. in PCM format, or in the form of MIDI and/or OSC messages, converts the sound attributes to visually renderable information, e.g. in the form of floating point value vectors created by a Fast Fourier Transform (FFT) applied to the frequency spectrum, and transmits the visually renderable information (i.e., information configured for XR visualization or changes thereto) to the XR controller. 
The XR controller receives the visually renderable information and changes the visual features of the XR visualization responsive to the visually renderable information.”); and generating a rendered video by applying the audio visualizations at the one or more positions of particle emitters associated with the target object in the one or more video frames for the predetermined time period (Wehner, ¶0042, “tracks a position and a rotation of 3D objects and/or the position of a user or multiple users in space with computer vision technology or alternate positional tracking technology; interprets position and rotation of 3D objects in 3D space as a method of controlling external entities, such as the 3rd party software applications; communicates in real-time between tracking enabled computer devices and the 3rd party software applications; and provides visual feedback layered on top of real world physical objects and spaces via extended reality viewing mechanisms.” ¶0077, “The DAW could communicate back to the bridge musical/sound attributes such as tempo, decibel level, time signature, MIDI messages generated by the DAW, the sound generated, and more. Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” ¶0078, “This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect. It may also be generalized to using other message protocols, such as OSC, or to feed audio data into the XR controller to visualize it. In another embodiment, messages indicative of movements of fingers of a sensored glove may be sent to the XR device, in order to visualize them as a “virtual hand” that moves in correlation to these movements.”).

Wehner does not explicitly disclose “dividing the frequency spectrum data into a predefined number of different frequency groups based on a frequency range; determining an average frequency for each frequency group for every predetermined time period, the average frequencies for different frequency groups are used to change one or more audio visualizations to be added to the respective video frames for a duration of the predetermined time period; determining audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period”, and “applying the average frequencies for different frequency groups to update the one or more audio visualizations to control the one or more graphics or effects to be applied to the trail particles”.
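To make the allegedly missing limitations concrete: dividing frequency-spectrum data into a predefined number of groups and averaging each group per time window is a standard binning scheme. Below is a minimal sketch in Python with NumPy; the window length, group count, and the reading of "average frequency" as the mean spectral magnitude per group are assumptions for illustration, not taken from the claims or the references.

import numpy as np

SAMPLE_RATE = 44100
WINDOW = 2048        # samples per "predetermined time period" (assumed)
NUM_GROUPS = 12      # predefined number of frequency groups (Adiletta uses 12 bins)

def group_averages(window_samples: np.ndarray) -> np.ndarray:
    """FFT one audio window, split the magnitude spectrum into NUM_GROUPS
    contiguous frequency ranges, and return one average per group."""
    mags = np.abs(np.fft.rfft(window_samples))
    groups = np.array_split(mags, NUM_GROUPS)   # contiguous frequency ranges
    return np.array([g.mean() for g in groups])

# One average vector per window; each vector could drive the graphics or
# effects applied to the trail particles for that period.
audio = np.random.randn(SAMPLE_RATE)            # stand-in for decoded audio
per_window = [group_averages(audio[i:i + WINDOW])
              for i in range(0, len(audio) - WINDOW + 1, WINDOW)]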
Adiletta teaches dividing the frequency spectrum data into a predefined number of different frequency groups based on a frequency range (Adiletta, page 4, “Rather than doing a unique note interpretation, we grouped frequencies into 12 unique frequency ranges (four low frequency, four middle frequency, and four high frequency). Each of these frequency groups is called a bin.” “We further process the FFT amplitudes by combining raw 512 data points into 12 groups. This is accomplished using a cascading splitter (found in the patcher create-bins). We created a cascading splitter using the jit.split function followed by a jit.3m which averages the points. In the first instance of our cascading splitter, we group the first two points of the FFT representing 0 - 86 Hz. Then we average the two points to get our average bin value. Then we scale the bin to normalize across all 12 bins so the “100%” of each bin has the same amplitude value Now we have 12 unique frequency bins that we can use for our visualization.”); determining an average frequency for each frequency group for every predetermined time period, the average frequencies for different frequency groups are used to change one or more audio visualizations to be added to the respective video frames for a duration of the predetermined time period (Adiletta, page 4, “The approach we took is we first, calculated a running average of the audio frequency bins (p create-avg-bins). Then we took the absolute value of the difference between the current frequency bins (p create-bins) from the averaged frequency bins (p create-avg-bins) which produced articulation data (p current-volatility).” “We then computed the average volatility over eight data points, essentially deriving the average volatility of each bin over .186 seconds:” “Now that we have all this data readily available for use, we can create the visualization. The first visualization we created was graphing all the data at once. The visualization shows four graphs: current-frequency-values, averaged-frequency-values, current-volatility-values, averaged-volatility-values. There is also a matrixctl object that indicates which triggers are active (p Visualizing).”); determining audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period (Adiletta, page 4, “Now that we have all this data readily available for use, we can create the visualization. The first visualization we created was graphing all the data at once. The visualization shows four graphs: current-frequency-values, averaged-frequency-values, current-volatility-values, averaged-volatility-values. There is also a matrixctl object that indicates which triggers are active (p Visualizing).” Page 5, “Since there are 12 frequency bins, we created 12 gravity points, separated along the y axis so that the lowest frequency bin has its gravity point moving around y = -12, and the highest frequency bin has its gravity point moving around y = 10.” “The RGB value for setting the color of the particle. 
This is derived using the average-frequency-bin values.” “All of these parameters are set in a unique patcher for each frequency bin”), applying the average frequencies for different frequency groups to update the one or more audio visualizations to control one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters (Adiletta, page 4, “The approach we took is we first, calculated a running average of the audio frequency bins (p create-avg-bins).” “The visualization shows four graphs: current-frequency-values, averaged-frequency-values, current-volatility-values, averaged-volatility-values.” Pages 5 and 6 teach “12 particle groups” and the particle simulation.)

Wehner and Adiletta are considered to be analogous art because all pertain to visual effects. It would have been obvious before the effective filing date of the claimed invention to have modified Wehner with the features of “dividing the frequency spectrum data into a predefined number of different frequency groups based on a frequency range; determining an average frequency for each frequency group for every predetermined time period, the average frequencies for different frequency groups are used to change one or more audio visualizations to be added to the respective video frames for a duration of the predetermined time period; determining audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period” and “applying the average frequencies for different frequency groups to update the one or more audio visualizations of the one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters” as taught by Adiletta. The suggestion/motivation would have been in order to generate a visual performance using particles by varying each particle's position, velocity, and color based on parameters extracted via digital processing of input audio (Adiletta, Abstract).

To further support obviousness and to address applicant’s arguments, Kim also teaches updating the one or more audio visualization parameters of the visualization template including one or more graphics or effects, based at least in part on the frequency spectrum data of the audio data and the path or position of the target object in the video data (Kim, ¶0089, “when a frequency of background music corresponds to a preset frequency band and an object is recognized from a foreground of a video, the user terminal 110 may apply a special effect to a position of the recognized object.” ¶0090, “the user terminal 110 may determine temporal information 531 based on a first feature 320 extracted from background music 310, and determine spatial information 532 based on a second feature 325 extracted from a video 315. For example, the user terminal 110 may apply a special effect to a video at a point in time at which a frequency of background music corresponds to a preset frequency band. In this example, when an object is recognized from a foreground of the video, the user terminal 110 may apply the special effect to a position of the recognized object.”). Wehner, Adiletta and Kim are considered to be analogous art because all pertain to visual effects.
It would have been obvious before the effective filing date of the claimed invention to have modified Wehner with the features of “updating the one or more audio visualization parameters of the visualization template including one or more graphics or effects, based at least in part on the frequency spectrum data of the audio data and the path or position of the target object in the video data” as taught by Kim. The suggestion/motivation would have been in order to apply a special effect associated with the background music to the video (Kim, Abstract).

As to claim 3, claim 1 is incorporated and the combination of Wehner, Adiletta and Kim discloses the trail particles include graphics that are rendered based on the frequency spectrum of the audio data and are applied at the positions of particle emitters in the corresponding frame of the video data (Wehner, ¶0086, “the user may decide that a vertical movement of the physical object along the Y axis controls the VCF filter cutoff frequency of a synthesizer connected via MIDI as well as the brightness of a spotlight connected via DMX as well as brightness of a XR visual such as a glowing sun layered on top of the physical object.” ¶0151, “a DAW, messages representative of sound attributes of sound generated by the external entity, e.g. in PCM format, or in the form of MIDI and/or OSC messages, converts the sound attributes to visually renderable information, e.g. in the form of floating point value vectors created by a Fast Fourier Transform (FFT) applied to the frequency spectrum, and transmits the visually renderable information (i.e., information configured for XR visualization or changes thereto) to the XR controller. The XR controller receives the visually renderable information and changes the visual features of the XR visualization responsive to the visually renderable information.” Fig. 19, ¶0131, “The physical object could be transformed into a glowing, fiery orb and could expand and contract based on the tempo of a song (e.g., as the tempo increases and decreases), the overall size of the orb could increase or decrease based on decibel level or loudness units relative to full scale (LUFs—an audio term for volume level) (e.g., as the loudness increases and decreases), eruptions out of the orb could be based on beat drops, luminescence and color of the orb could be based on the physical objects rotation and position (e.g., rotate left or right transforms color from blue to red, and vice versa, while movement left or right increases and decreases luminescence), and so on.”).

As to claim 4, claim 1 is incorporated and the combination of Wehner, Adiletta and Kim discloses the one or more video visualization parameters are associated with the video data (Wehner, ¶0077, “The DAW could communicate back to the bridge musical/sound attributes such as tempo, decibel level, time signature, MIDI messages generated by the DAW, the sound generated, and more.
Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” Fig. 19, ¶0131, “The physical object could be transformed into a glowing, fiery orb and could expand and contract based on the tempo of a song (e.g., as the tempo increases and decreases), the overall size of the orb could increase or decrease based on decibel level or loudness units relative to full scale (LUFs—an audio term for volume level) (e.g., as the loudness increases and decreases), eruptions out of the orb could be based on beat drops, luminescence and color of the orb could be based on the physical objects rotation and position (e.g., rotate left or right transforms color from blue to red, and vice versa, while movement left or right increases and decreases luminescence), and so on.”).

As to claim 6, claim 1 is incorporated and the combination of Wehner, Adiletta and Kim discloses determining the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period comprises controlling the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period (Wehner, ¶0077, “The DAW could communicate back to the bridge musical/sound attributes such as tempo, decibel level, time signature, MIDI messages generated by the DAW, the sound generated, and more. Then the bridge could convert (e.g., normalize) those parameters to visually renderable information (e.g., parameters that can be used to change/control visual features of the visualizations, and that represent control messages configured for controlling/changing the visualizations) and send those parameters as converted to the XR Controller, so that the internal and external viewing mechanisms can layer in animations and visualizations on top of a physical object that react in real time to the music (i.e., visual features of the visualizations change responsive to the parameters as converted). As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” Adiletta, page 4, “Now that we have all this data readily available for use, we can create the visualization. The first visualization we created was graphing all the data at once. The visualization shows four graphs: current-frequency-values, averaged-frequency-values, current-volatility-values, averaged-volatility-values.
There is also a matrixctl object that indicates which triggers are active (p Visualizing).” Page 5, “Since there are 12 frequency bins, we created 12 gravity points, separated along the y axis so that the lowest frequency bin has its gravity point moving around y = -12, and the highest frequency bin has its gravity point moving around y = 10.” “The RGB value for setting the color of the particle. This is derived using the average-frequency-bin values.” “All of these parameters are set in a unique patcher for each frequency bin”).

As to claim 7, claim 6 is incorporated and the combination of Wehner, Adiletta and Kim discloses the audio visualization parameters comprises at least one parameter selected from a group comprising of a width, a height, color, and/or brightness of the one or more graphics or effects to be added to the one or more video frames of the video data (Wehner, ¶0078, “an increase or decrease in the MIDI CC value results in a corresponding increase or decrease in the size of the XR visual. This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect. It may also be generalized to using other message protocols, such as OSC, or to feed audio data into the XR controller to visualize it.”).

As to claim 8, claim 1 is incorporated and the combination of Wehner, Adiletta and Kim discloses generating the rendered video comprises: generating a visualization layer with the trail particles at each position of the particle emitters (Wehner, ¶0077, “As an example, the physical object could be transformed into an orb or a glowing sun that expands and contracts to the beat, and explodes and puts out solar flares when the beat drops, and so on.” ¶0078, “The software in the XR controller, used to track the movement of the physical object and derive position/size values based on the tracking, is also configured to manipulate the XR visual (i.e., control visual features of the XR visual), generated by the XR controller and overlaid on the physical object in the scene, responsive to this resize message. The software receives from the bridge the desired object size, and controls/changes the size of the XR visual so that it is representative of the normalized object size from the bridge. Thus, an increase or decrease in the MIDI CC value results in a corresponding increase or decrease in the size of the XR visual. This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect.”); adjusting an opacity level of the visualization layer (Wehner, ¶0106, “the parameters can be used to control the color and opacity of a light as shown in 5.11.
With lighting, when a movement parameter is at its minimum value either the opacity or color could be turned to its lowest value (“start” for color) and when at its maximum value, the color would be brought up to the “end” value or opacity turned up to 100%.”); adding the trail particles to the one or more video frames for a duration of the predetermined time period (Wehner, ¶0078, “This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect.” ¶0173, “(i) normalize and deliver those properties to the XR Controller, and (2) use the normalized properties to either trigger specific effects or to progressively manipulate an effect over time based on a range of numerical values that will be send multiple times per second based on the auditory properties at each timestamp.”); and presenting the rendered video with motion-audio visualization to the user (Wehner, ¶0078, “This approach may be generalized to control different visual features of the XR visual responsive to MIDI messages that convey different musical/sound attributes, like the current tempo of the song being mapped to the color (blue means slow, red means fast) or a certain MIDI note being played triggering a “flash” effect.”).

As to claim 9, the combination of Wehner, Adiletta and Kim discloses a computing device for rendering motion-audio visualizations to a display, the computing device comprising: a processor; and a memory having a plurality of instructions stored thereon that, when executed by the processor, causes the computing device to: obtain video data comprising one or more video frames; obtain a visualization template from a visualization template database to be applied to each of the one or more video frames, wherein the visualization template is configured to define one or more video visualization parameters and one or more audio visualization parameters; determine a position of a target object in each of the one or more video frames, wherein the target object is configured by the visualization template; determine, based on the position of the target object in each of the one or more video frames, one or more positions of particle emitters in the corresponding video frame that are distinct from the position of the target object, wherein the particle emitters control a source of trail particles to be emitted from the target object with the one or more video visualization parameters to indicate a real-time position of the target object in the video data; obtain audio data; determine a frequency spectrum from the audio data for every predetermined time period to obtain frequency spectrum data; divide the frequency spectrum data into a predefined number of different frequency groups based on a frequency range; determine an average frequency for each frequency group for every predetermined time period, the average frequencies for different frequency groups are used to change one or more audio visualizations to be added to the respective video frames for a duration of the predetermined time period; update the one or more video visualization parameters and the one or more audio visualization parameters of the visualization template, based at least in part on the frequency spectrum data of the audio data and the path or position of the target object in the video data, by
updating the one or more positions of the particle emitters to emit the trail particles, updating the one or more video visualization parameters to control a behavior of the trail particles, and applying the average frequencies for different frequency groups to update the one or more audio visualizations to control one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters; determine audio visualizations based at least in part on the updated one or more audio visualization parameters, the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period; and generate a rendered video by applying the audio visualizations at the one or more positions of particle emitters associated with the target object in the one or more video frames for the predetermined time period (See claim 1 for detailed analysis.).

As to claim 11, claim 9 is incorporated and the combination of Wehner, Adiletta and Kim discloses the trail particles include graphics that are rendered based on the frequency spectrum of the audio data and are applied at the positions of particle emitters in the corresponding frame of the video data (See claim 3 for detailed analysis.).

As to claim 12, claim 9 is incorporated and the combination of Wehner, Adiletta and Kim discloses the video visualization parameters are associated with the video data (See claim 4 for detailed analysis.).

As to claim 13, claim 9 is incorporated and the combination of Wehner, Adiletta and Kim discloses to determine the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period comprises to control the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period (See claim 6 for detailed analysis.).

As to claim 14, claim 9 is incorporated and the combination of Wehner, Adiletta and Kim discloses to generate the rendered video comprises to: generate a visualization layer with the trail particles at each position of the particle emitters; adjust an opacity level of the visualization layer; add the trail particles to the one or more video frames for a duration of the predetermined time period; and present the rendered video with motion-audio visualization to the user (See claim 8 for detailed analysis.).
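The layer-compositing steps recited in claims 8 and 14 (generate a visualization layer with the trail particles, adjust its opacity, add it to the frames) amount to ordinary alpha blending. A minimal sketch follows; the array shapes and the global-opacity scaling are assumptions for illustration, not taken from the application or the references.

import numpy as np

def composite(frame: np.ndarray, layer_rgb: np.ndarray,
              layer_alpha: np.ndarray, opacity: float) -> np.ndarray:
    """Blend a trail-particle visualization layer onto a video frame.
    frame, layer_rgb: HxWx3 floats in [0, 1]; layer_alpha: HxW coverage mask;
    opacity: the adjustable opacity level of the whole visualization layer."""
    a = (layer_alpha * opacity)[..., None]   # per-pixel effective alpha
    return (1.0 - a) * frame + a * layer_rgb

frame = np.zeros((8, 8, 3))                               # stand-in video frame
layer = np.zeros((8, 8, 3)); layer[2, 3] = [1.0, 0.8, 0.2]  # one bright "particle"
alpha = (layer.sum(axis=2) > 0).astype(float)             # particle pixels are opaque
out = composite(frame, layer, alpha, opacity=0.7)         # adjusted opacity level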
As to claim 15, the combination of Wehner, Adiletta and Kim discloses a non-transitory computer-readable medium storing instructions for rendering motion-audio visualizations to a display, the instructions when executed by one or more processors of a computing device, cause the computing device to: obtain video data comprising one or more video frames; obtain a visualization template from a visualization template database to be applied to each of the one or more video frames, wherein the visualization template is configured to define one or more video visualization parameters and one or more audio visualization parameters; determine a position of a target object in each of the one or more video frames, wherein the target object is configured by the visualization template; determine, based on the position of the target object in each of the one or more video frames, one or more positions of particle emitters in the corresponding video frame that are distinct from the position of the target object, wherein the particle emitters control a source of trail particles to be emitted from the target object with the one or more video visualization parameters to indicate a real-time position of the target object in the video data; obtain audio data; determine a frequency spectrum from the audio data for every predetermined time period to obtain frequency spectrum data; divide the frequency spectrum data into a predefined number of different frequency groups based on a frequency range; determine an average frequency for each frequency group for every predetermined time period, the average frequencies for different frequency groups are used to change one or more audio visualizations to be added to the respective video frames for a duration of the predetermined time period; update the one or more video visualization parameters and the one or more audio visualization parameters of the visualization template, based at least in part on the frequency spectrum data of the audio data and the path or position of the target object in the video data, by updating the one or more positions of the particle emitters to emit the trail particles, updating the one or more video visualization parameters to control a behavior of the trail particles, and applying the average frequencies for different frequency groups to update the one or more audio visualizations of the one or more graphics or effects to be applied to the trail particles with the behavior controlled by the one or more video visualization parameters; determine audio visualizations based at least in part on the updated one or more audio visualization parameters, the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period; and generate a rendered video by applying the audio visualizations at the one or more positions of particle emitters associated with the target object in the one or more video frames for the predetermined time period (See claim 1 for detailed analysis.).

As to claim 17, claim 15 is incorporated and the combination of Wehner, Adiletta and Kim discloses the trail particles include graphics that are rendered based on the frequency spectrum of the audio data and are applied at the positions of particle emitters in the corresponding frame of the video data (See claim 3 for detailed analysis.).
As to claim 18, claim 15 is incorporated and the combination of Wehner, Adiletta and Kim discloses the video visualization parameters are associated with the video data (See claim 4 for detailed analysis.).

As to claim 19, claim 15 is incorporated and the combination of Wehner, Adiletta and Kim discloses to determine the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period comprises to control the audio visualizations to be added to the one or more video frames for a duration of the predetermined time period based on the average frequency of each frequency group during the respective predetermined time period (See claim 6 for detailed analysis.).

As to claim 20, claim 15 is incorporated and the combination of Wehner, Adiletta and Kim discloses to generate the rendered video comprises to: generate a visualization layer with the trail particles at each position of the particle emitters; adjust an opacity level of the visualization layer; add the trail particles to the one or more video frames for a duration of the predetermined time period; and present the rendered video with motion-audio visualization to the user (See claim 8 for detailed analysis.).

As to claim 23, claim 1 is incorporated and the combination of Wehner, Adiletta and Kim discloses each frequency group is associated with one or more audio visualization parameters (Adiletta, page 4, “Now that we have all this data readily available for use, we can create the visualization. The first visualization we created was graphing all the data at once. The visualization shows four graphs: current-frequency-values, averaged-frequency-values, current-volatility-values, averaged-volatility-values. There is also a matrixctl object that indicates which triggers are active (p Visualizing).” Page 5, “Since there are 12 frequency bins, we created 12 gravity points, separated along the y axis so that the lowest frequency bin has its gravity point moving around y = -12, and the highest frequency bin has its gravity point moving around y = 10.” “The RGB value for setting the color of the particle. This is derived using the average-frequency-bin values.” “All of these parameters are set in a unique patcher for each frequency bin”).

Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Wehner et al. (US Pub 2019/0005733 A1) in view of Adiletta, Matthew Joseph, and Oliver Thomas, "An artistic visualization of music modeling a synesthetic experience," arXiv preprint arXiv:2012.08034 (2020), further in view of Kim et al. (US 2020/0351450 A1), Goodrich et al. (US Pub 2021/0065464 A1), Hijabi VFX Girl (Glowing Lines Effect | Blottermedia Dance Effects (After Effects Tutorial), YouTube video, https://www.youtube.com/watch?v=zLpdMtNi9yg, 02/17/2019), and S. Sun, R. Zhang, L. Chen and D. Li ("Special effect simulation in virtual battlefield environment," 2010 3rd International Conference on Biomedical Engineering and Informatics, Yantai, China, 2010, pp. 2811-2815, doi: 10.1109/BMEI.2010.5640568).

As to claim 21, claim 1 is incorporated and the combination of Wehner and Adiletta does not explicitly disclose “the video visualization parameters define a particle color, a spawning rate, an initial velocity vector, and a particle lifetime”.
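For orientation, the four parameters recited in claim 21 correspond to a conventional particle-emitter configuration of the kind Sun describes. A hypothetical sketch follows; the names, defaults, and 2D velocity are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class EmitterConfig:
    color: tuple = (1.0, 0.8, 0.2)        # particle color (RGB)
    spawning_rate: float = 120.0          # particles generated per unit time
    initial_velocity: tuple = (0.0, 1.5)  # initial velocity vector on emission
    lifetime: float = 0.8                 # seconds before a particle disappears

@dataclass
class Particle:
    pos: list
    vel: tuple
    age: float = 0.0

def spawn(cfg: EmitterConfig, emitter_pos: tuple, dt: float) -> list:
    """Emit new particles for one time step; the emitter assigns each newly
    generated particle its initial attributes (cf. Sun, page 2813)."""
    count = int(cfg.spawning_rate * dt)
    return [Particle(pos=list(emitter_pos), vel=cfg.initial_velocity)
            for _ in range(count)]

particles = spawn(EmitterConfig(), emitter_pos=(0.5, 0.5), dt=1 / 30)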
Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Wehner et al. (US Pub 2019/0005733 A1) in view of Adiletta, Matthew Joseph, and Oliver Thomas, "An artistic visualization of music modeling a synesthetic experience," arXiv preprint arXiv:2012.08034 (2020), further in view of Kim et al. (US Pub 2020/0351450 A1), Goodrich et al. (US Pub 2021/0065464 A1), Hijabi VFX Girl ("Glowing Lines Effect | Blottermedia Dance Effects (After Effects Tutorial)," YouTube video, https://www.youtube.com/watch?v=zLpdMtNi9yg, 02/17/2019), and S. Sun, R. Zhang, L. Chen and D. Li, "Special effect simulation in virtual battlefield environment," 2010 3rd International Conference on Biomedical Engineering and Informatics, Yantai, China, 2010, pp. 2811-2815, doi: 10.1109/BMEI.2010.5640568.

As to claim 21, claim 1 is incorporated and the combination of Wehner and Adiletta does not explicitly disclose "the video visualization parameters define a particle color, a spawning rate, an initial velocity vector, and a particle lifetime." However, particle visualization parameters are well known to include a particle color, a spawning rate, an initial velocity vector, and a particle lifetime. Sun teaches that the video visualization parameters define a particle color, a spawning rate, an initial velocity vector, and a particle lifetime (Sun, page 2813, "2) Particle Attributes Assignment": "These parameters can include the spawning rate (how many particles are generated per unit time), the particles' initial velocity (the direction they are emitted upon creation), lifetime (length of time each individual particle exists before disappearing), color, transparency, size, and many more."). Wehner, Goodrich, Hijabi and Sun are considered to be analogous art because all pertain to visual effects. It would have been obvious before the effective filing date of the claimed invention to have modified Wehner with the features of "the video visualization parameters define a particle color, a spawning rate, an initial velocity vector, and a particle lifetime" as taught by Sun. The suggestion/motivation would have been that the emitter is also responsible for assigning initial attributes to newly generated particles (Sun, page 2813). All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.
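As a minimal sketch of the attribute set Sun enumerates (spawning rate, initial velocity, lifetime, color, transparency, size), the following hypothetical configuration structure groups those per-emitter parameters. The field names and default values are assumptions, not taken from Sun or the application.

```python
# Illustrative only; Sun describes the attributes, not this data structure.
from dataclasses import dataclass

@dataclass
class EmitterConfig:
    spawn_rate: float = 200.0                   # particles generated per second
    initial_velocity: tuple = (0.0, 1.0, 0.0)   # direction on emission
    lifetime: float = 1.5                       # seconds before disappearing
    color: tuple = (1.0, 0.8, 0.2)              # RGB assigned at creation
    transparency: float = 0.8                   # initial alpha
    size: float = 4.0                           # initial particle size
```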
As to claim 22, claim 1 is incorporated and the combination of Wehner and Adiletta does not disclose that the target object is a body part of one or more individuals in the one or more video frames. Goodrich teaches trail particles to be emitted from the target object with one or more video visualization parameters, wherein the target object is a body part of one or more individuals in the one or more video frames (Goodrich, Figs. 18-21, ¶0021-0024; ¶0230, "a controlled particle system (e.g., animated projectile)"; ¶0231, "the animation of the controlled particle system changes and the attachments are moved in response to movement data." The figures show particles emitted from a person.). Goodrich further teaches "one or more positions of particle emitters in the corresponding video frame that are away from the position of the target object" and "the particle emitters control a source of trail particles with one or more video visualization parameters" (Goodrich, Fig. 12 and Fig. 16; ¶0058, "The elements of the object are then tracked by aligning the area for each element with a position for each of the at least one element"; ¶0095, "enables different particles and layers to be placed in different positions for each user who views such particles and layers"; ¶0196, "the 3D effects 1260 are particle-based effects that are rendered spatially and are moving in response to sensor information (e.g., gyroscopic data, and the like) on the viewer's electronic device" and "3D object rendered in proximity to facial image data from the image data"; ¶0224, "3D effects illustrating particles, a reflection on a graphical object (e.g., glasses), and a 3D attachment that are rendered in response to movement data (e.g., motion data from a gyroscopic sensor), and an example of 3D effects illustrating post effects and a dynamic 3D attachment that are rendered in response to movement data"; ¶0230, "3D effects illustrating a controlled particle system (e.g., animated projectile), and 2D and 3D attachments that are rendered in response to movement data").

Wehner and Goodrich are considered to be analogous art because all pertain to visual effects. It would have been obvious before the effective filing date of the claimed invention to have modified Wehner with the features of "trail particles to be emitted from the target object with one or more video visualization parameters, wherein the target object is a body part of one or more individuals in the one or more video frames" as taught by Goodrich. The suggestion/motivation would have been to have visual effects apply to a media content item (Goodrich, ¶0046).

In addition, Hijabi teaches trail particles to be emitted from the target object with one or more video visualization parameters, wherein the target object is a body part of one or more individuals in the one or more video frames (screenshot from the start of the video [image omitted]). Wehner, Goodrich and Hijabi are considered to be analogous art because all pertain to visual effects. It would have been obvious before the effective filing date of the claimed invention to have modified Wehner with the features of "trail particles to be emitted from the target object with one or more video visualization parameters, wherein the target object is a body part of one or more individuals in the one or more video frames" as taught by Hijabi. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.
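To illustrate the claim 22 arrangement at issue, a tracked body part as the target object with emitters positioned away from it, here is a hypothetical sketch. The tracker, the offset values, and the function names are assumptions; neither Goodrich nor the application discloses this code.

```python
# Illustrative sketch only; the tracking source is assumed, not specified.
from dataclasses import dataclass

@dataclass
class Emitter:
    x: float
    y: float

def place_emitters(body_part_xy, offsets):
    """Position emitters at fixed offsets from a tracked body-part position."""
    bx, by = body_part_xy
    return [Emitter(bx + dx, by + dy) for dx, dy in offsets]

# Per frame: track the body part (e.g., a wrist), then re-position emitters
# so the emitted trail particles follow its real-time position in the video.
emitters = place_emitters(body_part_xy=(120.0, 340.0),
                          offsets=[(-8.0, 0.0), (8.0, 0.0), (0.0, -8.0)])
```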
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN, whose telephone number is (571) 270-7951. The examiner can normally be reached Monday through Friday, 8:00 a.m. to 5:00 p.m. PST (mid-day flex). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Jun 21, 2021
Application Filed
Sep 30, 2022
Non-Final Rejection — §103
Dec 27, 2022
Response Filed
Jan 09, 2023
Final Rejection — §103
Mar 13, 2023
Response after Non-Final Action
Mar 18, 2023
Response after Non-Final Action
Apr 13, 2023
Request for Continued Examination
Apr 18, 2023
Response after Non-Final Action
May 10, 2023
Non-Final Rejection — §103
Aug 14, 2023
Response Filed
Sep 26, 2023
Final Rejection — §103
Dec 01, 2023
Response after Non-Final Action
Jan 02, 2024
Request for Continued Examination
Jan 09, 2024
Response after Non-Final Action
Feb 10, 2024
Non-Final Rejection — §103
May 13, 2024
Response Filed
Jul 15, 2024
Final Rejection — §103
Sep 19, 2024
Response after Non-Final Action
Sep 23, 2024
Response after Non-Final Action
Oct 18, 2024
Request for Continued Examination
Oct 22, 2024
Response after Non-Final Action
Nov 18, 2024
Non-Final Rejection — §103
Feb 24, 2025
Response Filed
Mar 01, 2025
Final Rejection — §103
May 01, 2025
Response after Non-Final Action
Jun 03, 2025
Request for Continued Examination
Jun 04, 2025
Response after Non-Final Action
Sep 22, 2025
Non-Final Rejection — §103
Dec 15, 2025
Response Filed
Jan 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497
THIN FILM TRANSISTOR AND ARRAY SUBSTRATE
2y 5m to grant Granted Apr 14, 2026
Patent 12597176
IMAGE GENERATOR AND METHOD OF IMAGE GENERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12589481
TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12588347
DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12586265
LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

11-12
Expected OA Rounds
68%
Grant Probability
98%
With Interview (+29.9%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
