Prosecution Insights
Last updated: April 19, 2026
Application No. 17/303,658

Augmented Reality Filters for Captured Audiovisual Performances

Final Rejection — §103, §DP

Filed: Jun 03, 2021
Examiner: SCHREIBER, CHRISTINA MARIE
Art Unit: 2837
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Smule Inc.
OA Round: 4 (Final)

Grant Probability: 80% (Favorable)
OA Rounds: 5-6
To Grant: 2y 4m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 80% (768 granted / 963 resolved), +11.8% vs Tech Center average (grants above average)
Interview Lift: +16.3% higher allow rate among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 4m average prosecution; 33 applications currently pending
Career History: 996 total applications across all art units
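These headline figures reduce to simple arithmetic. A minimal sketch in Python, assuming (as the projections section later on the page implies) that the "with interview" figure is just the career allow rate plus the interview lift, capped at 100%; the variable names are illustrative:

```python
# Sketch of the arithmetic behind the examiner stats shown above.
granted, resolved = 768, 963

allow_rate = granted / resolved              # 0.7975 -> displayed as 80%
tc_average = allow_rate - 0.118              # implied Tech Center baseline (~68%)
with_interview = min(allow_rate + 0.163, 1)  # 0.9605 -> displayed as 96%

print(f"Career allow rate:  {allow_rate:.1%}")     # 79.8%
print(f"Implied TC average: {tc_average:.1%}")     # 68.0%
print(f"With interview:     {with_interview:.1%}") # 96.1%
```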

Statute-Specific Performance

§101: 3.9% (-36.1% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 34.6% (-5.4% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 963 resolved cases.
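Each delta above is the examiner's per-statute rate minus the Tech Center baseline, so the implied baselines can be recovered directly. A short sketch (the dictionary layout is illustrative, not the analytics tool's actual data model):

```python
# Recover the implied Tech Center baselines from the figures above:
# baseline = examiner_rate - delta_vs_tc.
rates = {            # statute: (examiner rate, delta vs TC average)
    "§101": (0.039, -0.361),
    "§103": (0.290, -0.110),
    "§102": (0.346, -0.054),
    "§112": (0.277, -0.123),
}
for statute, (rate, delta) in rates.items():
    print(f"{statute}: examiner {rate:.1%} vs TC estimate {rate - delta:.1%}")
# All four baselines work out to ~40%, consistent with a single
# Tech-Center-wide estimate (the chart's "black line").
```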

Office Action

§103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-29 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 10-35 of copending Application No. 17/299,295.
Although the claims at issue are not identical, they are not patentably distinct from each other because the combination of claims 1 and 2 in both applications recites nearly identical limitations; the only difference is the order in which the elements are introduced in claim 1. The remaining dependent claims recite the same limitations. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claims

The amendments to claims 1, 4, 12, 16-19, 29-30 and 33 have been accepted.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 14-23, 29, 30 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over the US patent application publication to Hamalainen et al. (US 2018/0336871) in view of the Korean publication to Lee (KR 2017/0138135 A) (English translation provided by the Examiner), further in view of the US patent application publications to Hung (US 2007/0157795) and Patten et al. (US 2007/0089152).

In terms of claim 1, Hamalainen et al. teaches a method comprising: receiving input information regarding a music track and an instrument; determining attribute information of a music track based on received input information, the attribute information comprising data for a user to play a music track with an instrument; receiving real time content of audiovisual (AV) input signals using at least one capturing device; and generating augmented reality (AR) instruction information based on attribute information of a music track (see claim 1).

The subject matter of claim 1 differs from that of Hamalainen et al. in augmenting a rendering of an audiovisual performance with plural applied visual effects, wherein visual scale, movement in a visual field, timing, color, or intensity of at least one of the applied visual effects is based on an element of musical structure coded in, or computationally determined from, a temporally-synchronized score or lyrics. However, the implementation of this limitation within the method of Hamalainen et al. would have been obvious to one of ordinary skill in the art at the time of the effective filing date, given Lee discloses a similar method comprising said limitation (see paragraph [0038]; and claim 9: an augmented reality-based piano performance assistant device 100 displays lyrics on top of a virtual keyboard when lyrics are included in performance information; and a piano performance assistant information providing unit overlays note symbols of different colors on a keyboard to be pressed by a right hand and a virtual keyboard to be pressed by a left hand among virtual keys loaded into an augmented reality environment based on the performance information).

The subject matter of claim 1 differs from that of Hamalainen et al. and Lee in the plural applied visual effects being coded in a visual effects schedule.
Hung discloses generating a visualizing map (see paragraph [0020], “every segment could be allocated with some visualizing expression, and the visualizing map records the distribution...The visualizing map is constituted by the visualizing expressions allocated to all segments of music” (i.e. selectable as a collective set of mappings). A segment allocated with visualizing expressions recorded in the visualizing map reads on the visual effects schedule and set of selectable visual effects). The implementation of this limitation within the combined method of Hamalainen et al. and Lee would have been obvious to one of ordinary skill in the art at the time of the effective filing date, given doing so would have allowed a user to sort, search or classify the music in a more convenient way.

Other examples in the art of well-known sorting and storing of visual effects can be seen in the US patent application publications to Sung et al. (US 2016/0358595) (see paragraphs [0033], mapped sequence of visual layouts, [0034] and [0036], mapped to cell), Steinwedel et al. (US 2018/0374462) (see paragraphs [0030], [0032], [0083], [0084], map to visual layouts, [0040], [0041], [0059]-[0062], effects schedules, [0058], [0061] and [0081]-[0084], recipe), and Holmberg et al. (US 2019/0306540) (see paragraphs [0021], [0022] and [0028], visual effects schedule). In particular, the US patent application publication to Fleischhauer et al. (US 2013/0125000) discloses a set of visual effects selectable as a collective set of mappings (Fleischhauer, [0317], "When rendering the presentation, the media-editing application identifies the active video angle and uses the effects stack corresponding to this active angle". In addition, in paragraph [0382], "application of video or audio effects to an active angle (pixel modification effects, transforms, distortions, etc.), trim operations (ripple, roll, slip, slide), and compositing operations using multiple lanes (blending, picture in picture, etc.), among other operations"). This is another example, similar to Hung, of a collective mapping: the effects stack presented by Fleischhauer lets the user set the entire stack at once, instead of applying effects one by one to every video segment.

The subject matter of claim 1 differs from that of Hamalainen et al., Lee and Hung in the plural applied visual effects being selected from a visual effects schedule that defines a set of visual effects selectable for respective segments.

Patten et al. discloses selection from a graphical user interface that defines a set of visual effects selectable for a video segment (see paragraph [0032], “the graphical user interface 500 provides a video effects menu 502 and a storyboard window 504 for creating the video effect clip 340...the user performs a drag and drop operation to add a particular video effect 507 from the menu 502 to the timeline 506”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to use the concept that Patten’s multiple visual effects can be selected and added into a video segment to allocate the visual effects for a segment in Hamalainen et al. as modified by Lee and Hung. The motivation for doing so would have been enabling users to create complicated video effects quickly and easily.

Other examples in the art of well-known selectable visual effects schedules can be seen in the US patent application publications to Steinwedel et al.
(US 2018/0374462) (see paragraphs [0040], [0041], [0059]-[0061] and [0066], selectable visual effects schedule, applying recipes from a selected visual effects schedule, effects applied to respective portions/segments), and Holmberg et al. (US 2019/0306540) (see paragraph [0022], visual effects schedule temporally selective).

As for claims 2 and 5, Lee further teaches the limitations of claims 2 and 5 (see paragraph [0038]; and claim 9: an augmented reality-based piano performance assistant device 100 displays lyrics on top of a virtual keyboard when lyrics are included in performance information; and a piano performance assistant information providing unit overlays note symbols of different colors on a keyboard to be pressed by a right hand and a virtual keyboard to be pressed by a left hand among virtual keys loaded into an augmented reality environment based on performance information). Therefore, obviousness stands.

As for claims 3, 4, 21-23 and 29, Hamalainen et al. further teaches the limitations as recited in these claims (see claim 1: generating augmented reality (AR) instruction information based on attribute information of a music track). Therefore, obviousness stands.

As for claims 14-19, Hamalainen et al. further teaches the limitations as recited in these claims (see paragraph [0208]; and figure 7b: "For example, the first layer of the augmented reality (AR) instruction information comprises correct hand pose visualized so that the user can match it. Visualized bone lengths etc. can be estimated from user's hand by using computer vision of the capturing device, for example"). Therefore, obviousness stands.

As for claim 20, Hamalainen et al. further teaches the limitations as recited in this claim (see paragraph [0208]: "The system receives a signal input 510 from a microphone or some other input, such as a camera. The signal is then converted 520 into parameter data. This data shows e.g. information on the frequency components of the signal and their amplitudes, i.e. pitch and salience, and it may also include information on timing, volume, duration, style of playing (like staccato) or up vs. down strumming in guitar. Further, parameter data may include melody, harmony, rhythm, tempo, meter, articulation, dynamics and the sonic qualities of timbre and texture"). Therefore, obviousness stands.

In terms of claim 30, Hamalainen et al., Lee, Hung and Patten again teach the similar limitations as presented above in claim 1, including the invention of Hamalainen et al. implemented within a system comprising at least a guest and host pairing of network-connected devices configured to capture at least vocal audio (see paragraph [0092]: "The user device 110 comprises a client application 111 and the capturing device 160 and/or the instrument 120 may just accept an invite (sent via a communication link locally or remotely over network 150) to join a session, for example").

In terms of claim 33, Hamalainen et al., Lee, Hung and Patten again teach the similar limitations as presented above in claim 1, including the invention of Hamalainen et al.
implemented within a system comprising a geographically distributed set of network-connected devices configured to capture audiovisual performances including vocal audio with performance synchronized video (see paragraph [0092]: "The user device 110 comprises a client application 111 and the capturing device 160 and/or the instrument 120 may just accept an invite (sent via a communication link locally or remotely over network 150) to join a session, for example").

Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Hamalainen et al. in view of Lee, Hung and Patten as applied to claim 1 above, and further in view of the US patent application publication to Bernstein et al. (US 2016/0277802).

Hamalainen et al., Lee, Hung and Patten fail to explicitly teach the additional features of claims 6-9; however, Bernstein et al. does teach said limitations (see paragraph [0056]; and figure 2c: "The three viewers have provided engagements in the form of signals of appreciation represented by icons 250 and comments 255. The icons 250 are an example of a type of engagement representation. The social media server may provide indications of the engagements and the video interaction engine may trigger display of the representations of the engagement representations, such as icons 250 and comments 255"). Therefore, applying social media type aspects to the method of Hamalainen et al., or implementing the method on, or uploading to, a social network setting for communal interaction, would have been obvious to one of ordinary skill in the art at the time of the effective filing date, and is quite well known in the art today.

Claims 10-13, 24-28 and 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Hamalainen et al. in view of Lee, Hung and Patten as applied to claim 1 above, and further in view of the US patent application publication to Godfrey et al. (US 2018/0262654).

Hamalainen et al., Lee, Hung and Patten fail to explicitly teach the limitations of claim 10. However, Godfrey et al. does disclose said limitations (see paragraph [0012]: "In some embodiments of the present invention, a method of preparing coordinated audiovisual performances from geographically distributed performer contributions includes receiving via a communication network, a first audiovisual encoding of a first performer, including first performer vocals captured at a first remote device and receiving via the communication network, a second audiovisual encoding of a second performer, including second performer vocals captured at a second remote device"). Therefore, implementing the method of Hamalainen et al. as a well-known karaoke style and allowing for communal contributions would have been obvious to one of ordinary skill in the art.

Hamalainen et al., Lee, Hung and Patten fail to explicitly teach the limitations of claim 11. However, Godfrey et al. does disclose said limitations (see paragraph [0033]: "In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track"). Therefore, it would have again been obvious to adapt the method of Hamalainen et al. for correlation with well-known karaoke style rendering.

Hamalainen et al., Lee, Hung and Patten fail to explicitly teach the limitations of claims 12 and 13. However, Godfrey et al.
does disclose said limitations (see paragraph [0050]: "Finally, at position 193 along coordinated audiovisual performance timeline 151, calculated levels of an operative computationally-defined audio feature(s) are such that performance synchronized video of first and second performers is displayed with equivalent visual prominence. Position 193 illustrates a dynamically determined prominence consistent with each of the performers singing in chorus (consistent with a chorus section of an otherwise part A, part B duet-style coding of a vocal score) and/or singing at generally comparable levels as indicated by calculations of audio power, spectral flux or centroids"). Therefore, providing communal interactions within the method of Hamalainen et al. would have again been obvious.

Hamalainen et al., Lee, Hung and Patten fail to explicitly teach the limitations of claims 24-28. However, Godfrey et al. does disclose said limitations (see claim 2: "A method of preparing coordinated audiovisual performances from geographically distributed performer contributions, the method comprising: receiving via a communication network, a first audiovisual encoding of a first performer, including first performer vocals captured at a first remote device and first performer video"). Therefore, implementing network style communal interaction within the method of Hamalainen et al. would have again been obvious.

Hamalainen et al., Lee, Hung and Patten fail to explicitly teach the limitations of claims 31-32. However, Godfrey et al. does disclose said limitations (see paragraph [0034]: "Contributions of multiple vocalists are coordinated and mixed in a manner that selects for visually prominent presentation performance synchronized video of one or more of the contributors. Prominence of particular performance synchronized video may be based, at least in part, on computationally-defined audio features extracted from (or computed over) captured vocal audio. Over the course of a coordinated audiovisual performance timeline, these computationally-defined audio features are selective for performance synchronized video of one or more of the contributing vocalists"). Therefore, implementing network style communal interaction within the system of Hamalainen et al. would have again been obvious.

Claims 1, 2, 5-11, 16, 20 and 24-28 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera et al. (US 2013/0070093) in view of Na et al. (US 2016/0035323), further in view of the US patent application publications to Hung (US 2007/0157795) and Patten et al. (US 2007/0089152).

In terms of claims 1 and 2, Rivera et al. discloses a method (see paragraph [0023], “a method of recording a karaoke performance”) comprising: accessing a computer readable encoding of an audiovisual performance captured (see paragraph [0089], “Audiovisual signals are captured as a song is sung...The performer's audio may be digitally or otherwise overlaid or combined with the background music, as that is available via the karaoke jukebox system itself. Images and/or video may be captured by a camera mounted on the karaoke jukebox system...the audio and image(s) and video(s) may be synced together... The combined video and/or audio may be optionally uploaded”) in connection with a temporally-synchronized backing track (see paragraph [0089], “the background music”), score and lyrics (see paragraph [0068], “Metadata may be associated with songs in the karaoke database or catalog.
Such metadata information may include, for example, lyrics of a song, rated difficulty”. In addition, in paragraph [0024], “Audiovisual data captured from a user device is received, with the audiovisual data including first audio data and first video data...The first audio data and the audio-only data are digitally combined such that the first audio data is at least partially replaced with the audio-only data in order to produce a new audiovisual data file with user-generated video content synchronized with high-quality audio content based on a common time reference value”).

However, though Rivera teaches a rendering of the audiovisual performance (see Fig. 17d), Rivera does not explicitly disclose “augmenting a display with plural applied visual effects” or “an element of musical structure coded in”.

Na et al. discloses augmenting a display with plural applied visual effects (see paragraph [0064], “The special effect adding module 220 can select the particular special effect that is displayed according to the analyzed music information”). Na discloses wherein visual scale, movement in a visual field, timing, color, or intensity of at least one of the applied visual effects is based on an audio feature computationally extracted from music play information (see paragraph [0075], “The play information of the music file can pertain to music that is encoded in the music file. By way of example, the play information may include at least one of a tone, a volume level, a pitch, a rhythm, a tempo, a meter, and a texture of the music file”. In addition, in paragraphs [0076]-[0077], “The electronic device can extract a feature value or a pattern of the acquired music attribute information or music play information, and analyze it using a mathematical algorithm...the electronic device can output a visual effect based on the music information...the electronic device can change at least one of a brightness, a color, and a chroma setting of the display based on the music attribute information or the music play information of the analyzed music information”). Na discloses an element of musical structure coded in (see paragraph [0075], “The play information of the music file can pertain to music that is encoded in the music file. By way of example, the play information may include at least one of a tone, a volume level, a pitch, a rhythm, a tempo, a meter, and a texture of the music file”).

It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to perform Rivera’s audiovisual data with user-generated video content synchronized with audio content using Na’s visual effect through a display of the electronic device based on the music information, as it could be used to achieve the predictable result of augmenting a rendering of the audiovisual performance with plural applied visual effects, wherein visual scale, movement in a visual field, timing, color, or intensity of at least one of the applied visual effects is based on an audio feature computationally extracted from the audiovisual performance or from the temporally-synchronized backing track. The motivation for doing so would have been allowing a user to select his/her desired special effect, and set various performances based on a visual, aural, or tactile sense.

The subject matter of amended claim 1 differs from that of Rivera and Na et al. in the plural applied visual effects being coded in a visual effects schedule.
Hung discloses generating a visualizing map (see paragraph [0020], “every segment could be allocated with some visualizing expression, and the visualizing map records the distribution...The visualizing map is constituted by the visualizing expressions allocated to all segments of music” (i.e. selectable as a collective set of mappings). A segment allocated with visualizing expressions recorded in the visualizing map reads on the visual effects schedule and set of selectable visual effects). The implementation of this limitation within the combined method of Rivera and Na et al. would have been obvious to one of ordinary skill in the art at the time of the effective filing date, given doing so would have allowed a user to sort, search or classify the music in a more convenient way.

Other examples in the art of well-known sorting and storing of visual effects can be seen in the US patent application publications to Sung et al. (US 2016/0358595) (see paragraphs [0033], mapped sequence of visual layouts, [0034] and [0036], mapped to cell), Steinwedel et al. (US 2018/0374462) (see paragraphs [0030], [0032], [0083], [0084], map to visual layouts, [0040], [0041], [0059]-[0062], effects schedules, [0058], [0061] and [0081]-[0084], recipe), and Holmberg et al. (US 2019/0306540) (see paragraphs [0021], [0022] and [0028], visual effects schedule). In particular, the US patent application publication to Fleischhauer et al. (US 2013/0125000) discloses a set of visual effects selectable as a collective set of mappings (Fleischhauer, [0317], "When rendering the presentation, the media-editing application identifies the active video angle and uses the effects stack corresponding to this active angle". In addition, in paragraph [0382], "application of video or audio effects to an active angle (pixel modification effects, transforms, distortions, etc.), trim operations (ripple, roll, slip, slide), and compositing operations using multiple lanes (blending, picture in picture, etc.), among other operations"). This is another example, similar to Hung, of a collective mapping: the effects stack presented by Fleischhauer lets the user set the entire stack at once, instead of applying effects one by one to every video segment.

The subject matter of claim 1 differs from that of Rivera, Na et al. and Hung in the plural applied visual effects being selected from a visual effects schedule that defines a set of visual effects selectable for respective segments.

Patten et al. discloses selection from a graphical user interface that defines a set of visual effects selectable for a video segment (see paragraph [0032], “the graphical user interface 500 provides a video effects menu 502 and a storyboard window 504 for creating the video effect clip 340...the user performs a drag and drop operation to add a particular video effect 507 from the menu 502 to the timeline 506”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to use the concept that Patten’s multiple visual effects can be selected and added into a video segment to allocate the visual effects for a segment in Rivera as modified by Na and Hung. The motivation for doing so would have been enabling users to create complicated video effects quickly and easily.

Other examples in the art of well-known selectable visual effects schedules can be seen in the US patent application publications to Steinwedel et al.
(US 2018/0374462) (see paragraphs [0040], [0041], [0059]-[0061] and [0066], selectable visual effects schedule, applying recipes from a selected visual effects schedule, effects applied to respective portions/segments), and Holmberg et al. (US 2019/0306540) (see paragraph [0022], visual effects schedule temporally selective).

As for claim 5, Rivera discloses a performance synchronized presentation of text from the lyrics (see paragraph [0133], “a performance synchronized presentation of text from the lyrics”) and the audiovisual performance (see Fig. 17d). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses an audio feature extracted from the music play information (see Na paragraph [0075], “The play information of the music file can pertain to music that is encoded in the music file. By way of example, the play information may include at least one of a tone, a volume level, a pitch, a rhythm, a tempo, a meter, and a texture of the music file”. In addition, in paragraphs [0076]-[0077], “The electronic device can extract a feature value or a pattern of the acquired music attribute information or music play information, and analyze it using a mathematical algorithm...the electronic device can output a visual effect based on the music information...the electronic device can change at least one of a brightness, a color, and a chroma setting of the display based on the music attribute information or the music play information of the analyzed music information”). Therefore, obviousness stands.

As for claim 6, Rivera discloses control, at least in part, on a received input from a member of an audience to which the audiovisual performance is streamed (see paragraph [0074], “The KJ may use dedicated buttons, menus, or the like, to cause encouraging messages to be displayed to a display visible by the performer and/or the audience. For instance, the KJ may trigger random "good" or encouraging messages, and "bad" or taunting messages to be displayed by pressing the good and bad message buttons 406 and 408”). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses that the visual effect is controlled (see Na paragraph [0064], “The special effect adding module 220 can select the particular special effect that is displayed according to the analyzed music information”).

As for claim 7, Rivera discloses receiving a like/love or upvote/downvote indication from the member of the audience (see paragraph [0074], “The KJ may use dedicated buttons, menus, or the like, to cause encouraging messages to be displayed to a display visible by the performer and/or the audience. For instance, the KJ may trigger random "good" or encouraging messages, and "bad" or taunting messages to be displayed by pressing the good and bad message buttons 406 and 408”). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses presenting the applied visual effect (see Na paragraph [0064], “The special effect adding module 220 can select the particular special effect that is displayed according to the analyzed music information”).
As for claim 8, Rivera discloses receiving chat traffic from at least one member of the audience and, based on content of the received chat traffic, presenting a score (see paragraph [0105], “Feedback may be produced automatically (e.g., in the case of pitch meter or the like, cheers, etc.), based on patron-specified messages (e.g., sent via text, email, through a mobile application running on a mobile device of the patron, etc.), KJ provided messages, etc. A score may be calculated in step S910, e.g., based in part on the scoring metrics”). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses presenting the applied visual effect (see Na paragraph [0064], “The special effect adding module 220 can select the particular special effect that is displayed according to the analyzed music information”).

Regarding claim 9, Rivera discloses visually presenting content or keywords from the received chat traffic (see paragraph [0073], “play prerecorded applause or cheering, display encouraging or amusing comments on the video display systems”. In addition, in paragraph [0105], “Feedback may be provided during and/or after the performance in step S908. The feedback may be encouraging or taunting comments, instructions or other visual indications to sing higher or lower (e.g., using a pitch meter or the like), generated applause or cheers or the like, etc.”).

As for claim 10, Rivera discloses receiving the accessed encoding, via a communications network (see paragraph [0232], “This link to the new processed video may be available online for streaming or can be downloaded”. Streaming or downloading is considered receiving the accessed encoding), from a remote portable computing device at which the audiovisual performance was captured in connection with a karaoke-style audible rendering (see paragraph [0082], “FIG. 6 is a flowchart showing an illustrative process for logging into a karaoke jukebox, selecting on a display or with a remote control a song to be performed, and optionally with a communication arrangement uploading data to a social networking site”) of the temporally-synchronized backing track (see paragraph [0085], “enable the creation of a "mixed performance" that accepts audio from the karaoke jukebox microphone(s) input(s), as well as the backing music audio track”), and visual presentation of the temporally-synchronized lyrics (see paragraph [0157], “This video screen may perform a face tracking alignment and show the singer's performance superimposed with the lyrics”) and of pitch cues in correspondence with the temporally-synchronized score (see paragraph [0105], “e.g., using a pitch meter or the like), generated applause or cheers or the like, etc. Feedback may be produced automatically (e.g., in the case of pitch meter or the like, cheers, etc.), based on patron-specified messages (e.g., sent via text, email, through a mobile application running on a mobile device of the patron, etc.), KJ provided messages, etc. A score may be calculated in step S910, e.g., based in part on the scoring metrics. The scoring metrics may, for instance, determine how many points are to be awarded for singing a note within a specified range of an expected pitch, singing a word or series of words or beat-boxing or the like at appropriate or expected times”).
As for claim 11, Rivera discloses capturing the audiovisual performance (see paragraph [0089], “Audiovisual signals are captured as a song is sung...The performer's audio may be digitally or otherwise overlaid or combined with the background music, as that is available via the karaoke jukebox system itself. Images and/or video may be captured by a camera mounted on the karaoke jukebox system...the audio and image(s) and video(s) may be synced together... The combined video and/or audio may be optionally uploaded”) in connection with a temporally-synchronized backing track (see paragraph [0089], “the background music”) in connection with a karaoke-style audible rendering (see paragraph [0082], “FIG. 6 is a flowchart showing an illustrative process for logging into a karaoke jukebox, selecting on a display or with a remote control a song to be performed, and optionally with a communication arrangement uploading data to a social networking site”) of the temporally-synchronized backing track (see paragraph [0085], “enable the creation of a "mixed performance" that accepts audio from the karaoke jukebox microphone(s) input(s), as well as the backing music audio track”), and visual presentation of the temporally-synchronized lyrics (see paragraph [0157], “This video screen may perform a face tracking alignment and show the singer's performance superimposed with the lyrics”) and of pitch cues in correspondence with the temporally-synchronized score (see paragraph [0105], “e.g., using a pitch meter or the like), generated applause or cheers or the like, etc. Feedback may be produced automatically (e.g., in the case of pitch meter or the like, cheers, etc.), based on patron-specified messages (e.g., sent via text, email, through a mobile application running on a mobile device of the patron, etc.), KJ provided messages, etc. A score may be calculated in step S910, e.g., based in part on the scoring metrics. The scoring metrics may, for instance, determine how many points are to be awarded for singing a note within a specified range of an expected pitch, singing a word or series of words or beat-boxing or the like at appropriate or expected times”).

As for claim 16, Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses color, saturation or contrast (see Na paragraph [0063], “the special effect adding module 220 can change at least one of a brightness, a color, and a chroma setting of the display”).

As for claim 20, Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses tempo of a backing audio track (see Na paragraph [0057], “a tone, a volume level, a pitch, a rhythm, a tempo, a meter, and a texture of the music”).

As for claim 24, Rivera discloses performed, at least in part, on a content server or service platform to which geographically-distributed, network-connected, vocal capture devices are communicatively coupled (see Rivera, Fig. 2, which shows a local server to which geographically-distributed, network-connected, vocal capture devices (jukebox systems) are communicatively coupled).

As for claim 25, Rivera discloses performed, at least in part, on a network-connected, vocal capture device communicatively coupled to a content server or service platform (see Rivera, Fig. 2, which shows this performed, at least in part, on a network-connected, vocal capture device (jukebox system) communicatively coupled to a content server or service platform).
As for claim 26, Rivera discloses performed, at least in part, on a network-connected, vocal capture device communicatively coupled as a host device to at least one other network-connected, vocal capture device operating as a paired guest device (see paragraph [0070], “The network interface 318 of the karaoke jukebox system 302 also may accommodate connections to patrons’ mobile devices 320”. The karaoke jukebox system serves as a host device to at least one other network-connected, vocal capture device (mobile devices) operating as a paired guest device).

As for claim 27, Rivera discloses embodied, at least in part, as a computer program product encoding of instructions executable on a content server or service platform to which a plurality of geographically-distributed, network-connected, vocal capture devices are communicatively coupled (see paragraph [0025], “non-transitory computer readable storage mediums tangibly store programs that, when executed, implement these and/or other methods”. Rivera Fig. 2 shows a local server to which geographically-distributed, network-connected, vocal capture devices (jukebox systems) are communicatively coupled).

As for claim 28, Rivera discloses embodied, at least in part, as a computer program product encoding of instructions executable on a network-connected, vocal capture device on which the audiovisual performance is audibly and visually presented to a human user (see paragraph [0025], “non-transitory computer readable storage mediums tangibly store programs that, when executed, implement these and/or other methods”. In addition, in paragraph [0061], “The jukeboxes may also receive and store data constituting images (e.g., still and/or moving video and/or graphical images) that can be displayed on the display 18 of the jukebox device 16”. Rivera Fig. 2 shows a local server and network-connected, vocal capture devices (jukebox systems) on which the audiovisual performance is audibly (Fig. 3) and visually presented to a human user). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses augmented rendering (see Na paragraph [0063], “the special effect adding module 220 can change at least one of a brightness, a color, and a chroma setting of the display”).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Rivera et al. (US 2013/0070093) in view of Na et al. (US 2016/0035323), Hung (US 2007/0157795) and Patten (US 2007/0089152), as applied to claim 1, in further view of The et al. (US 2016/0027210).

As for claim 12, Rivera discloses capturing a second audiovisual performance in connection with a karaoke-style visual presentation of the temporally-synchronized lyrics (see paragraph [0073], “a control system may be used to move a nervous performer back in the queue, raise or lower the volume for a particular performer”. In addition, in paragraph [0024], “Audiovisual data captured from a user device is received, with the audiovisual data including first audio data and first video data...The first audio data and the audio-only data are digitally combined such that the first audio data is at least partially replaced with the audio-only data in order to produce a new audiovisual data file with user-generated video content synchronized with high-quality audio content based on a common time reference value”.
In addition, in paragraph [0133], “synchronized lyrics being presented on the karaoke jukebox device”), the captured second audiovisual performance including performance synchronized video of a second performer (see paragraph [0024], “Audiovisual data captured from a user device is received, with the audiovisual data including first audio data and first video data...The first audio data and the audio-only data are digitally combined such that the first audio data is at least partially replaced with the audio-only data in order to produce a new audiovisual data file with user-generated video content synchronized with high-quality audio content based on a common time reference value”. In addition, in paragraph [0133], “synchronized lyrics being presented on the karaoke jukebox device”).

Further, Rivera discloses a first audiovisual performance including performance synchronized video of a first performer to produce the accessed audiovisual performance (see paragraph [0024], “Audiovisual data captured from a user device is received, with the audiovisual data including first audio data and first video data...The first audio data and the audio-only data are digitally combined such that the first audio data is at least partially replaced with the audio-only data in order to produce a new audiovisual data file with user-generated video content synchronized with high-quality audio content based on a common time reference value”). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses that the augmentation with the plural applied video effects, coded in a visual effects schedule, is applied to visuals detected in the visual field (see Na paragraph [0064], “The special effect adding module 220 can select the particular special effect that is displayed according to the analyzed music information”; and Hung, paragraph [0020], “every segment could be allocated with some visualizing expression, and the visualizing map records the distribution...The visualizing map is constituted by the visualizing expressions allocated to all segments of music”. A segment allocated with a visualizing expression recorded in the visualizing map reads on the visual effects schedule).

Rivera as modified by Na, Hung and Patten, however, does not explicitly disclose “compositing the captured second audiovisual performance with the first audiovisual performance”. The et al. discloses compositing a captured second video with a first video (see paragraph [0079], “a composite video that combines the generated first video and the captured second video”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to incorporate the concept of The’s compositing of two videos in the entertainment system taught by Rivera as modified by Na, Hung and Patten. The motivation for doing so would have been providing the ability to overlay a first video on a second video to generate a composite video.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Rivera et al. (US 2013/0070093) in view of Na et al. (US 2016/0035323), Hung (US 2007/0157795) and Patten (US 2007/0089152), as applied to claim 1, in further view of Aiba (US 2017/0041556).
As for claim 13, Rivera discloses that the captured first and second audiovisual performances present (see paragraph [0026], “enable a karaoke performer to participate in a karaoke performance in which the karaoke performer sings a song through a first microphone connected to the digital jukebox device that is playing the song”). Rivera as modified by Na, Hung and Patten teaches the augmentation; however, Rivera as modified by Na, Hung and Patten does not explicitly disclose “compositing, as a duet”. Aiba discloses compositing, as a duet (see paragraph [0028], “the reception terminal 20 displays the enlarged video of the participant B and the participant D on the display 212 as illustrated in FIG. 1B”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to incorporate the concept of Aiba’s compositing of video of participants in the entertainment system taught by Rivera as modified by Na, Hung and Patten. The motivation for doing so would have been allowing users of different terminals at different locations to communicate by simultaneous two-way video and audio transmissions.

Claims 14, 15, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera et al. (US 2013/0070093) in view of Na et al. (US 2016/0035323), Hung (US 2007/0157795) and Patten (US 2007/0089152), as applied to claim 1, in further view of Van Os et al. (US 10,270,983).

As for claim 14, Rivera teaches a vocal performer of the captured audiovisual performance (see paragraph [0023], “recording a karaoke performance in which a karaoke performer sings a song through a first microphone connected to a jukebox that is playing the song is provided”). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses dynamically rendered visual augmentations (see Na paragraph [0077], “the electronic device can change at least one of a brightness, a color, and a chroma setting of the display based on the music attribute information or the music play information of the analyzed music information”). Rivera as modified by Na, Hung and Patten does not, however, explicitly disclose “augmentations to face”. Van Os et al. discloses rendered visual augmentations to the face of a user detected in a visual field of the captured video (see column 40, lines 48-50, “As shown in FIG. 6G, robot avatar option 630-3 is positioned in selection region 629, which indicates robot avatar option 630-1 is selected”. Fig. 6G). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to incorporate the concept of Van Os’s rendered visual augmentations to the face of a user in the entertainment system taught by Rivera as modified by Na, Hung and Patten. The motivation for doing so would have been providing a virtual avatar visual effect applied to a representation of the subject in the image display region.

As for claim 15, Rivera teaches the vocal performer of the captured audiovisual performance. Rivera as modified by Na, Hung, Patten and Van Os, with the same motivation from claim 14, discloses presentation of a visual avatar for a user detected in the visual field of the captured video (see Van Os, Figs. 8D-8R).

As for claim 17, Rivera teaches a vocal performer. Rivera as modified by Na, Hung and Patten teaches the applied visual effect.
Rivera as modified by Na, Hung, Patten and Van Os, with the same motivation from claim 14, discloses a user detected in the visual field (see Van Os, Figs. 8D-8R).

As for claim 19, Rivera as modified by Na, Hung, Patten and Van Os, with the same motivation from claim 14, discloses a visually overlaid synthetic foreground (see Van Os, Figs. 8D-8R).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Rivera et al. (US 2013/0070093) in view of Na et al. (US 2016/0035323), Hung (US 2007/0157795) and Patten (US 2007/0089152), as applied to claim 1, in further view of Belimpasaki et al. (US 2012/0218296).

As for claim 18, Rivera discloses presenting a performance synchronized second vocal performer (see paragraph [0073], “a control system may be used to move a nervous performer back in the queue, raise or lower the volume for a particular performer”. In addition, in paragraph [0024], “Audiovisual data captured from a user device is received, with the audiovisual data including first audio data and first video data...The first audio data and the audio-only data are digitally combined such that the first audio data is at least partially replaced with the audio-only data in order to produce a new audiovisual data file with user-generated video content synchronized with high-quality audio content based on a common time reference value”). Rivera as modified by Na, Hung and Patten teaches the dynamically rendered visual augmentation. However, Rivera as modified by Na, Hung and Patten does not explicitly disclose “detected reflective surface”. Belimpasaki et al. discloses detecting a reflective surface (see paragraph [0107], “As illustrated in FIG. 8B, the windows of rows 807a-807g of building 805 are used to create a game (e.g. such as Tetris) wherein the virtual display of windows can have different colors”). Belimpasaki further discloses visual augmentation presenting a virtual display as a reflection in the detected reflective surface (see paragraph [0107], “As illustrated in FIG. 8B, the windows of rows 807a-807g of building 805 are used to create a game (e.g. such as Tetris) wherein the virtual display of windows can have different colors”. Fig. 8B). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to incorporate the concept of Belimpasaki’s overlaying of different colors on the windows in the entertainment system taught by Rivera as modified by Na, Hung and Patten, as it could be used to achieve the predictable result of dynamically rendered visual augmentation of a detected reflective surface or a synthetic augmentation of the captured audiovisual performance to include an apparent reflective surface, wherein the dynamically rendered visual augmentation presents performance synchronized second vocal performer visuals as an apparent reflection in the detected or apparent reflective surface. The motivation for doing so would have been to allow for the association of content to a location.

Claims 3, 4, 21-23 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera et al. (US 2013/0070093) in view of Na et al. (US 2016/0035323), Hung (US 2007/0157795) and Patten (US 2007/0089152), as applied to claim 1, in further view of Cohen et al. (US 2011/0126103).
As for claim 21, Rivera teaches a vocal audio track of the audiovisual performance encoding (see paragraph [0089], “Audiovisual signals are captured as a song is sung...The performer's audio may be digitally or otherwise overlaid or combined with the background music, as that is available via the karaoke jukebox system itself”). Rivera as modified by Na, Hung and Patten teaches the computationally extracted audio feature. However, Rivera as modified by Na, Hung and Patten does not explicitly disclose “segmenting”. Cohen et al. discloses segmenting (see paragraph [0035], “FIG. 1b is a screen-shot illustrating the karaoke collage organization procedure which is sent to the initiator, for dividing up and allocating segments of a particular song among various karaoke participants”). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date of the claimed invention to process the audio data of Rivera as modified by Na using the teaching of Cohen for dividing up and allocating segments of a particular song. The motivation for doing so would have been allowing a user to allocate segments of a selected song to various singers separated by distance, and to automatically combine all recordings performed of these segments to form a single karaoke performance.

As for claim 22, Rivera as modified by Na, Hung, Patten and Cohen, with the same motivation from claim 21, discloses a computational determination of vocal intensity with at least some segmentation boundaries constrained to temporally align with beats or tempo computationally extracted from the temporally-synchronized backing track (see Cohen paragraph [0026], “for dividing up and allocating segments of a particular song among various karaoke participants. A timeline 110 shows the beginning and ending times of each segment of the song. Such segments refer for example, to "verse 1", "chorus", "verse 2", "chorus 2" etc.”. In addition, in paragraph [0034], “provide the user with the root music source file, which contains the background music and displayable lyrics”. Fig. 1b).

As for claim 23, Rivera as modified by Na, Hung, Patten and Cohen, with the same motivation from claim 21, discloses a similarity analysis computationally performed on the temporally-synchronized lyrics to classify particular portions of the audiovisual performance encoding as verse or chorus (see Cohen paragraph [0026], “for dividing up and allocating segments of a particular song among various karaoke participants. A timeline 110 shows the beginning and ending times of each segment of the song. Such segments refer for example, to "verse 1", "chorus", "verse 2", "chorus 2" etc.”. Fig. 1b).

As for claim 3, Rivera as modified by Na, Hung, Patten and Cohen, with the same motivation from claim 21, discloses segmenting the temporally-synchronized backing track to provide the computationally extracted audio feature (see Cohen paragraph [0035], “dividing up and allocating segments of a particular song among various karaoke participants”).

As for claim 4, Rivera teaches the audiovisual performance. Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses that the applied visual effects augment the audiovisual performance with differing visual effects (see Na paragraph [0063], “the special effect adding module 220 can change at least one of a brightness, a color, and a chroma setting of the display”).
Rivera as modified by Na, Hung, Patten and Cohen, with the same motivation from claim 21, discloses that the extracted audio feature corresponds to one or more transitions (see Cohen, Fig. 1b, which shows an audio feature corresponding to one or more transitions) and discloses the different ones of the transitions (see Cohen, Fig. 1b, which shows different ones of the transitions).

As for claim 29, Rivera discloses that the temporally-synchronized score encodes musical sections of differing types (see paragraph [0069], “popularity scores associated with the songs, beat counts or known tempo data (e.g., retrieved from a metadata source including such information) saved in the database or otherwise known ahead of time”. In addition, in paragraph [0082], “The singer may be rated or scored based on quantitative measures such as, for example, synchronicity with the beat, deviations from the expected notes or pitches”). Rivera as modified by Na, Hung and Patten, with the same motivation from claim 1, discloses wherein the applied visual effects include differing visual effects (see Na paragraph [0063], “the special effect adding module 220 can change at least one of a brightness, a color, and a chroma setting of the display”). Rivera as modified by Na, Hung, Patten and Cohen, with the same motivation from claim 21, discloses encoded musical sections (see Cohen, Fig. 1b, which shows encoded musical sections).

Response to Arguments

Applicant’s arguments, filed 10/22/2025, regarding the amendments to the claims have been fully considered but are not persuasive. The Applicant argues that the cited references fail to disclose a set of visual effects selectable, as a collective set of mappings, for respective segments.

First, the applicant repeatedly argues that the visual effects are selected “for each respective segment”. This is not presented in the present claims. The claims merely recite that the visual effects are selectable as a collective set for respective segments, not a collective set for each segment.

Second, as outlined in the above rejection, the Examiner still believes Hung teaches a set of visual effects selectable as a collective set of mappings: “Hung discloses generating a visualizing map (see paragraph [0020], “every segment could be allocated with some visualizing expression, and the visualizing map records the distribution...The visualizing map is constituted by the visualizing expressions allocated to all segments of music” (i.e. selectable as a collective set of mappings). A segment allocated with visualizing expressions recorded in the visualizing map reads on the visual effects schedule and set of selectable visual effects).”

Third, the Examiner has provided an additional example of a collective set of mappings, as seen in the US patent application publication to Fleischhauer et al. (US 2013/0125000) (Fleischhauer, [0317], "When rendering the presentation, the media-editing application identifies the active video angle and uses the effects stack corresponding to this active angle". In addition, in paragraph [0382], "application of video or audio effects to an active angle (pixel modification effects, transforms, distortions, etc.), trim operations (ripple, roll, slip, slide), and compositing operations using multiple lanes (blending, picture in picture, etc.), among other operations").

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christina Schreiber, whose telephone number is (571) 272-4350. The examiner can normally be reached M-F, 7 AM to 4 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond, can be reached at 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTINA M SCHREIBER/
Primary Examiner, Art Unit 2837
01/24/2026
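The reply-deadline rules in the action's closing paragraphs reduce to date arithmetic. A minimal sketch, assuming whole-month date math with end-of-month clamping; the function names are illustrative, not USPTO tooling:

```python
from calendar import monthrange
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping to the month's last day."""
    y, m = divmod(d.month - 1 + months, 12)
    y, m = d.year + y, m + 1
    return date(y, m, min(d.day, monthrange(y, m)[1]))

def final_reply_deadlines(mailed, first_reply=None, advisory_mailed=None):
    """Deadlines described above: 3-month SSP, extension-fee clock, 6-month cutoff."""
    ssp = add_months(mailed, 3)            # shortened statutory period
    statutory_max = add_months(mailed, 6)  # absolute six-month limit
    fee_clock = ssp                        # 37 CFR 1.136(a) fees run from here
    # Two-month rule: a first reply within two months moves the fee clock to
    # the advisory action's mailing date when that falls after the SSP.
    if (first_reply and advisory_mailed
            and first_reply <= add_months(mailed, 2)
            and advisory_mailed > ssp):
        fee_clock = advisory_mailed
    return ssp, fee_clock, statutory_max

# For this action (mailed Jan 24, 2026): SSP Apr 24, 2026;
# hard statutory cutoff Jul 24, 2026.
print(final_reply_deadlines(date(2026, 1, 24)))
```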

Prosecution Timeline

Jun 03, 2021
Application Filed
Nov 17, 2023
Non-Final Rejection — §103, §DP
May 22, 2024
Response Filed
Jul 26, 2024
Final Rejection — §103, §DP
Jan 29, 2025
Request for Continued Examination
Jan 31, 2025
Response after Non-Final Action
Apr 17, 2025
Non-Final Rejection — §103, §DP
Oct 22, 2025
Response Filed
Jan 24, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12586554
METHODS AND APPARATUS TO EXTRACT A PITCH-INDEPENDENT TIMBRE ATTRIBUTE FROM A MEDIA SIGNAL
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12580985
ELEVATOR SYSTEM WITH A MULTIPURPOSE EDGE-GATEWAY AND METHOD FOR DATA COMMUNICATION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12565401
ELEVATOR SWITCH MONITORING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12565402
MULTI-CAR ELEVATOR SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12567395
MUSIC GENERATION METHOD, MUSIC GENERATION APPARATUS AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 80%
With Interview: 96% (+16.3%)
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
