DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/20/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
Claims 1, 3-7, 9-13, 15-19 and 21 are pending in this application.
The rejections of the claims under 35 U.S.C. § 101 are withdrawn.
Applicant’s arguments regarding the claim rejections under 35 U.S.C. §§ 102 and 103, filed 1/16/2026, have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of White.
Response to Arguments
Applicant’s arguments with respect to claims 1, 9 and 10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 9-11, 15-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gross et al. (US 2016/0360382, hereinafter “Gross”) in view of White (US 2018/0246898).
Regarding claim 1, Gross teaches A search method ([0864] and Fig. 15A: discussing a search user interface), comprising:
receiving a search operation for a first object, wherein the first object comprises a participating object of a first event ([0374]: discussing conducting a new search on the electronic device for event information related to the portion of the textual content and preparing information that is based at least in part on at least one event, retrieved via the new search, for display as the predicted content item. [0864] and Fig. 15A: discussing that, in FIG. 15A, the search user interface includes the search input portion 1120);
displaying an object card of the first object in a search result page in response to the search operation, wherein the object card comprises a first area and a second area, the first area is configured to play an object video of the first object ([0864] and Fig. 15A: discussing that the search user interface also includes the search results portion 1130, which is a map object that includes the vehicle location information at the geographic location identified by a dot and the location label “Infinite Loop 2” and the user's current location separately identified by a pin. [0869]: In some embodiments, the prompt from the virtual assistant instructs the user to take a photo of the vehicle at the geographic location and/or to take one or more photos/videos of the area surrounding the vehicle and displaying the user interface object includes displaying a selectable affordance (e.g., the affordance 1502, FIG. 15A-15B) that, when selected, causes the device to playback the recorded media.), and the second area is configured to display information of a sub-event, which the first object participates in, in the first event ([0871] and Fig. 15A: In some embodiments, the prompt is displayed on the display of the electronic device, receiving the information from the user includes receiving a textual description from the user that identifies the geographic location, and displaying the user interface object includes displaying the textual description from the user. In other embodiments, a selectable affordance is displayed that allows the user to access the textual description. For example, in response to a selection of the affordance 1535 (FIGS. 15A-15B), the device opens up a notes application that includes the textual description from the user.).
Gross does not explicitly teach wherein the first event comprises a competition event, and the participating object comprises a participating team or a participant of the competition event; wherein the information of the sub-event comprises schedule information; and in response to a first page display operation acting on information of a preset sub-event, displaying a sub-event details page of the first event and displaying detailed information of the information of the preset sub-event at a preset position of the sub-event details page, wherein the information of the preset sub-event comprises at least one selected from a group of information of an unstarted sub-event, information of an in-progress sub-event, and information of an ended sub-event.
White teaches wherein the first event comprises a competition event, and the participating object comprises a participating team or a participant of the competition event ([0100] and Fig. 4F: Accordingly, main window 430 displays a schedule of future contests or events, including event thumbnails 436a, 436b, and 436c. Event thumbnails 436, as shown here, comprise clickable links for purchasing tickets for a selected event. Event thumbnails 436 further comprise the date and time of the event, a description of the event (such as the two teams involved in the contest), and the address of the venue where the event will be held.);
wherein the information of the sub-event comprises schedule information ([0009]: In some cases, accessing a database of schedule information relating to athletic events to identify one or more athletic events occurring near the location may be based on one of a location of a venue where an athletic event is occurring, a media market for an athletic event, or a location based on a user preference or setting. [0100] and Fig. 4F: Accordingly, main window 430 displays a schedule of future contests or events, including event thumbnails 436a, 436b, and 436c. Event thumbnails 436, as shown here, comprise clickable links for purchasing tickets for a selected event. Event thumbnails 436 further comprise the date and time of the event, a description of the event (such as the two teams involved in the contest), and the address of the venue where the event will be held.); and
in response to a first page display operation acting on information of a preset sub-event, displaying a sub-event details page of the first event and displaying detailed information of the information of the preset sub-event at a preset position of the sub-event details page ([0180] and Fig. 12A: Athletic events window 1210 displays event thumbnails 1212a, 1212b, and 1212c. Each of the event thumbnails comprises icons 1214 representing the competitors of the athletic events. For example, event thumbnail 1212a comprises icons 1214a1 and 1214a2, representing the Indiana Pacers basketball team and the Boston Celtics basketball team, respectively. Likewise, event thumbnail 1212b comprises icons 1214b1 and 1214b2, representing the Atlanta Hawks basketball team and the Golden State Warriors basketball team, respectively. Finally, event thumbnail 1212c comprises icons 1214c1 and 1214c2, representing the Brooklyn Nets basketball team and the Dallas Mavericks basketball team, respectively.), wherein the information of the preset sub-event comprises at least one selected from a group of information of an unstarted sub-event ([0100] and Fig. 4F: Accordingly, main window 430 displays a schedule of future contests or events, including event thumbnails 436a, 436b, and 436c. Event thumbnails 436, as shown here, comprise clickable links for purchasing tickets for a selected event. Event thumbnails 436 further comprise the date and time of the event, a description of the event (such as the two teams involved in the contest), and the address of the venue where the event will be held.), information of an in-progress sub-event ([0178]: FIG. 12A illustrates an example of a portion of a graphical user interface of an athlete information discovery tool according to an embodiment of the disclosed subject matter. In FIG. 12A, a portion of a graphical user interface (GUI) 12 comprises an athletic events window 1210, labeled here “Playing Now” and an athletes window 1220, labeled here “Recent Plays.”), and information of an ended sub-event ([0099] and Fig. 4E: Accordingly, main window 430 displays video thumbnails 435 including, as shown here, video thumbnails 435a, 435b, and 435c. Each video thumbnail 435, as shown here, comprises a clickable link to a video file including a preview image, as well as a description of the video, an indicator of when the video was posted, and how many times the video has been viewed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the searching method of Gross with the athlete information discovery tool of White because it can provide a user, who may be a spectator of sporting events, an enhanced experience by enabling discovery of athlete information that is relevant to the user, where relevance may be inferred by factors relating to proximity in location and/or time, recent play activity, user preferences, and trending activity as has been described (White, [0205]).
Regarding claim 3, Gross in view of White teaches wherein after the displaying the object card of the first object in the search result page, the method further comprises: in response to a second page display operation acting on information of a first unstarted sub-event, displaying the sub-event details page of the first event and displaying detailed information of information of the first unstarted sub-event at a preset position of the sub-event details page (Gross, [0841]: In the search results portion 1130, the user interface for “Find My Car” application is displayed. The application uses location information of the user to display a pin on the map and shows the relative position of the user to the car indicated by the dot. In another example, based on a user's location and/or other information described above (e.g., usage data, textual content, and/or non-textual content etc.), an application displaying nearby points of interest is predicted to be the most of interest to the user. In FIG. 11I, the search results portion 1130 includes a point of interest, e.g., a restaurant within “Food” category named “Go Japanese Fusion”. The “Food” category is highlighted as indicated in double circle and the nearby restaurant “Go Japanese Fusion” is located based on the user's location information and the location of the restaurant. In another example, as shown in FIG. 11J, multiple points of interests within the “Food” category are predicted to be the most of interest to the user, and these points of interests, e.g., Caffe Macs, Out Steakhouse, and Chip Mexican Grill, within the food category are displayed and the “Food” category is highlighted.).
Regarding claim 4, Gross in view of White teaches wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying the sub-event details pages of the first event in response to a third page display operation acting in the second area, wherein the third page display operation comprises a preset sliding operation (Gross, [0848]: In FIGS. 11I and 11J, each point of interest includes an affordance and includes a selectable description, which upon selected, provides more information about the point of interest, e.g., selecting the icon and or the description of the point of interest provides more description, pricing, menu, and/or distance information.).
Regarding claim 5, Gross in view of White teaches wherein the first object comprises the participant, and first object information of the first object is further displayed in the search result page; and/or the object card further comprises a third area, the third area is configured to display second object information of the first object, and the second object information is configured to trigger the displaying of detailed information of the first object (Gross, [0881]: Turning to FIG. 16B, in response to detecting the first input, the device enters (1618) the search mode. In some embodiments, entering the search mode includes, before receiving any user input at the search interface (e.g., no search terms have been entered and no input has been received at a search box within the search interface), presenting, via the display, an affordance that includes (i) the information about the at least one activity and (ii) an indication that the at least one activity has been identified as currently popular at the point of interest, e.g., popular menu items at a nearby restaurant (e.g., affordance 1715 in FIGS. 17C-17D), ride wait times at a nearby amusement park (e.g., affordance 1713 in FIGS. 17A-17B), current show times at a nearby movie theatre, etc.).
Regarding claim 6, Gross in view of White teaches wherein the first object comprises the participating team, and the object card further comprises a fourth area, wherein the fourth area is configured to display third object information of the first object, the third object information comprises information of the participant, and the information of the participant is configured to trigger execution of a search operation for a corresponding participant (Gross, [0801] and Fig. 9B: In some embodiments, the predictions portion is further populated (808) with at least one affordance for a predicted category of nearby places (e.g., suggested places 960 section, FIG. 9B), and the predicted category of places (e.g., nearby places) is automatically selected based at least in part on one or more of: the current time and location data corresponding to the device.).
Claim 9 is rejected under the same rationale as claim 1. Gross also teaches An electronic device, comprising: at least one processor; and a memory in communication connection with the at least one processor, wherein the memory is configured to store a computer program being executed by the at least one processor, and the computer program, when executed by the at least one processor, enables the at least one processor to execute a search method ([0007]: discussing how the device has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions; [0864] and Fig. 15A: discussing a search user interface).
Claim 10 is rejected under the same rationale as claim 1. Gross also teaches A computer readable storage medium, wherein the computer readable storage medium is configured to store a computer instruction, and the computer instruction, when executed by a processor, implements a search method ([0007]: discussing how the device has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions; [0864] and Fig. 15A: discussing a search user interface).
Regarding claim 11, Gross in view of White teaches wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying information of a second unstarted sub-event as an appointed state in response to an appointment operation for the information of the second unstarted sub-event (Gross, [1120]: In some implementations, application manager 31_106 can request an application invocation forecast from sampling daemon 31_102. For example, sampling daemon 31_102 can provide an interface that allows the application manager 31_106 to request temporal forecast of application launches (e.g., “bundleId” start events) on mobile device 31_100. Sampling daemon 31_102 can receive events (e.g., “bundleId” start events) that indicate when the user has invoked applications on the mobile device 31_100, as described above. When application manager 31_106 requests a temporal forecast for the “bundleId” attribute, sampling daemon 31_102 can analyze the “bundleId” events stored in event data store 31_104 to determine when during the day (e.g., in which 15-minute timeslot) applications are typically invoked by the user. For example, sampling daemon 31_102 can calculate a probability that a particular time of day or time period will include an application invocation by a user using the temporal forecasting mechanism described above. [1121]: While application manager 31_106 is initializing, application manager 31_106 can request a temporal forecast of application invocations (e.g., “bundleId” start events) for the next 24 hours.).
Regarding claim 15, Gross in view of White teaches wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying the sub-event details pages of the first event in response to a third page display operation acting in the second area, wherein the third page display operation comprises a preset sliding operation (Gross, [0848]: In FIGS. 11I and 11J, each point of interest includes an affordance and includes a selectable description, which upon selected, provides more information about the point of interest, e.g., selecting the icon and or the description of the point of interest provides more description, pricing, menu, and/or distance information.).
Regarding claim 16, Gross in view of White teaches wherein the object card further comprises a third area, the third area is configured to display second object information of the first object, and the second object information is configured to trigger the displaying of detailed information of the first object (Gross, [0881]: Turning to FIG. 16B, in response to detecting the first input, the device enters (1618) the search mode. In some embodiments, entering the search mode includes, before receiving any user input at the search interface (e.g., no search terms have been entered and no input has been received at a search box within the search interface), presenting, via the display, an affordance that includes (i) the information about the at least one activity and (ii) an indication that the at least one activity has been identified as currently popular at the point of interest, e.g., popular menu items at a nearby restaurant (e.g., affordance 1715 in FIGS. 17C-17D), ride wait times at a nearby amusement park (e.g., affordance 1713 in FIGS. 17A-17B), current show times at a nearby movie theatre, etc.).
Regarding claim 18, Gross in view of White teaches wherein the object card further comprises a fourth area, wherein the fourth area is configured to display third object information of the first object, the third object information comprises information of the participant, and the information of the participant is configured to trigger execution of a search operation for a corresponding participant (Gross, [0801] and Fig. 9B: In some embodiments, the predictions portion is further populated (808) with at least one affordance for a predicted category of nearby places (e.g., suggested places 960 section, FIG. 9B), and the predicted category of places (e.g., nearby places) is automatically selected based at least in part on one or more of: the current time and location data corresponding to the device.).
Claims 7 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Gross in view of White and further in view of Goel et al. (US 2016/0173644, hereinafter “Goel”).
Regarding claim 7, Gross in view of White teaches the method of claim 1 as discussed above. Gross in view of White does not explicitly teach displaying the search result page as a first page background in response to the search operation, wherein the first page background is associated with a team identification color and/or a national identification color corresponding to the first object.
Goel teaches displaying the search result page as a first page background in response to the search operation, wherein the first page background is associated with a team identification color and/or a national identification color corresponding to the first object ([0066]: In the structured information page 420, the cast region 416 has the primary color as the background color, and the crew region 426 has the secondary color as the background color.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the searching method of Gross and White with the teaching about the background color of Goel because it would provide an effective way to present to users entity information that is organized and which includes information that users are more likely to consider important (Goel, [0014]).
Claim 21 is rejected under the same rationale as claim 7.
Claims 12-13 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gross in view of White and further in view of Einaudi et al. (US 2020/0107077, hereinafter “Einaudi”).
Regarding claim 12, Gross in view of White teaches the method of claim 1 as discussed above. Gross in view of White does not explicitly teach wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying a live streaming page in response to a trigger operation for information of a first in-progress sub-event, wherein the live streaming page is configured to display a live streaming image of a first sub-event, and the first sub-event is a sub-event corresponding to the information of the first in-progress sub-event.
Einaudi teaches wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying a live streaming page in response to a trigger operation for information of a first in-progress sub-event, wherein the live streaming page is configured to display a live streaming image of a first sub-event, and the first sub-event is a sub-event corresponding to the information of the first in-progress sub-event ([0084] and Fig. 6: In step 506, in a second portion of the UI a list of content sources over which the search for content was performed is provided. For instance, each of UI 600, UI 700, UI 800, and UI 900 show a listing of disparate content sources. For UI 600, a listing 614 is shown in a second portion 612. Listing 614 includes different content sources including, without limitation, Top Results, Live (e.g., live TV), Upcoming (e.g., future Live TV content), DVR (e.g., recorded content from a programming provider), Streaming (e.g., content streamed from applications/services such as Netflix®, Hulu®, etc.), Plex (e.g., content stored on a Plex Server®), Web (e.g., YouTube® content, podcasts from iHeartRadio®, etc.), Audio (e.g., iTunes® content)), User Devices, and/or the like. [0094]: Similarly, UI 700 of FIG. 7 shows that in a third portion 702 of UI 700, ranked results for “Warriors Basketball” are listed for different ones of the content sources/categories in listing 714 as Top Results. Third portion 702 may include one or more sub-portions in which additional individual search results are provided. The top ranked search result is for Live TV and is shown first, followed by a search result from Streaming Results, and then a search result from Web Results.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the searching method of Gross and White with the teaching about different live content sources of Einaudi because it would improve the user and UI experience through the described platform by providing blended search results in a single UI that is formatted to display content search results in a manner by which the disparate content sources are accounted for while the search results are presented and organized by rank and source (Einaudi, [0076]).
Regarding claim 13, Gross in view of White teaches the method of claim 1 as discussed above. Gross in view of White does not explicitly teach wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying a video playback page in response to a trigger operation for information of a first ended sub-event, wherein the video playback page is configured to display video highlights of a second sub-event, and/or a live streaming replay video, and the second sub-event is a sub-event corresponding to the information of the first ended sub-event.
Einaudi teaches wherein after the displaying the object card of the first object in the search result page, the method further comprises: displaying a video playback page in response to a trigger operation for information of a first ended sub-event, wherein the video playback page is configured to display video highlights of a second sub-event, and/or a live streaming replay video, and the second sub-event is a sub-event corresponding to the information of the first ended sub-event ([0105] and Fig. 6: In step 510, in the third portion of the UI one or more user-selectable options are provided that are configured to initiate an action associated with an associated result of the one or more of the ranked results. For instance, UI 600 of FIG. 6 shows a selectable object 616 (“Watch on Channel 140”) in third portion 602 of UI 600 which, when selected, initiates actions of switch 404 to allow the user to watch content associated with the “Seahawks vs. 49ers” search result for the blended search on “Football.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the searching method of Gross and White with the teaching about different live content sources of Einaudi because it would improve the user and UI experience through the described platform by providing blended search results in a single UI that is formatted to display content search results in a manner by which the disparate content sources are accounted for while the search results are presented and organized by rank and source (Einaudi, [0076]).
Regarding claim 17, Gross in view of White teaches the method of claim 16 as discussed above. Gross also teaches the first object comprises the information of the sub-event in which the participant participates is displayed in the second area, and the second object information of the participant is displayed in the third area ([0881]: Turning to FIG. 16B, in response to detecting the first input, the device enters (1618) the search mode. In some embodiments, entering the search mode includes, before receiving any user input at the search interface (e.g., no search terms have been entered and no input has been received at a search box within the search interface), presenting, via the display, an affordance that includes (i) the information about the at least one activity and (ii) an indication that the at least one activity has been identified as currently popular at the point of interest, e.g., popular menu items at a nearby restaurant (e.g., affordance 1715 in FIGS. 17C-17D), ride wait times at a nearby amusement park (e.g., affordance 1713 in FIGS. 17A-17B), current show times at a nearby movie theatre, etc.).
Gross in view of White does not explicitly teach that the first object comprises a participant, and that a live streaming image of the participant is displayed in the first area.
Einaudi teaches the first object comprises a participant, a live streaming image of the participant is displayed in the first area ([0105] and Fig. 6: In step 510, in the third portion of the UI one or more user-selectable options are provided that are configured to initiate an action associated with an associated result of the one or more of the ranked results. For instance, UI 600 of FIG. 6 shows a selectable object 616 (“Watch on Channel 140”) in third portion 602 of UI 600 which, when selected, initiates actions of switch 404 to allow the user to watch content associated with the “Seahawks vs. 49ers” search result for the blended search on “Football.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the searching method of Gross and White with the teaching about different live content sources of Einaudi because it would improve the user and UI experience through the described platform by providing blended search results in a single UI that is formatted to display content search results in a manner by which the disparate content sources are accounted for while the search results are presented and organized by rank and source (Einaudi, [0076]).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Gross in view of White and further in view of Mathur et al. (US 2021/0037168, hereinafter “Mathur”).
Regarding claim 19, Gross in view of White teaches the method of claim 18 as discussed above. Gross in view of White does not explicitly teach wherein the first object comprises the participating team, an associated video of the participating team is displayed in the first area, the information of the sub-event, which the participating team participates in, in the first event is displayed in the second area, and the third object information of the participating team is displayed in the fourth area.
Mathur teaches wherein the first object comprises the participating team, an associated video of the participating team is displayed in the first area, the information of the sub-event, which the participating team participates in, in the first event is displayed in the second area, and the third object information of the participating team is displayed in the fourth area (Fig. 32 and [0158], [0168]: discussing a graphical user interface (GUI) 3001 that comprises multiple virtual camera (VCAM) sections, including one group of cameras shown as Stadium VCAMs 2511 and Go-After-Player VCAMs 2510, which follow the indicated players on the court, ice, or field).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the searching method of Gross and White with the teaching about multiple virtual cameras of Mathur because providing users with the highest quality virtual camera views greatly enhances the user experience (Mathur, [0149]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hampson et al. (US 2017/0257337) discloses that content provided by content server 102 can be any suitable content, such as video content, audio content, television programs, movies, cartoons, sound effects, audiobooks, web pages, news articles, streaming live content (e.g., a streaming radio show, a live concert, and/or any other suitable type of streaming live content), electronic books, search results and/or any other suitable type of content.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHONG H NGUYEN whose telephone number is (571)270-1766. The examiner can normally be reached Monday-Friday, 8:30am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHONG H NGUYEN/ Primary Examiner, Art Unit 2156
March 11, 2026