Prosecution Insights
Last updated: April 17, 2026
Application No. 18/982,593

SYSTEMS AND METHODS FOR PROVIDING MEDIA CONTENT FOR AN EXHIBIT OR DISPLAY

Final Rejection under §103

Filed: Dec 16, 2024
Examiner: CASTRO, ALFONSO
Art Unit: 2421
Tech Center: 2400 — Computer Networks
Assignee: unknown
OA Round: 4 (Final)

Grant Probability: 50% (Moderate)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 8m
Grant Probability With Interview: 69%

Examiner Intelligence

Career Allow Rate: 50% (grants 218 of 435 resolved cases; -7.9% vs TC avg)
Interview Lift: +18.9% (strong; resolved cases with an interview vs without)
Avg Prosecution: 3y 8m (typical timeline)
Currently Pending: 38 applications
Total Applications: 473 (career history, across all art units)

Statute-Specific Performance

§101: 6.5% (-33.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 4.0% (-36.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 435 resolved cases.
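As a quick sanity check, the headline figures above are mutually consistent. The short script below reproduces them using only the counts shown in this report; the formulas are assumptions about how the analytics tool derives its metrics, not documented behavior.

```python
# Sanity-check of the dashboard figures above, using only numbers shown
# in this report. The formulas are assumed, not documented tool behavior.

granted, resolved = 218, 435                 # career totals
career_allow_rate = granted / resolved       # displayed as 50%

with_interview_rate = 0.69                   # "With Interview" figure
interview_lift = with_interview_rate - career_allow_rate  # displayed as +18.9%

# Statute-specific delta: examiner rate minus Tech Center average estimate.
sec_103_rate, sec_103_delta = 0.664, 0.264
implied_tc_avg_103 = sec_103_rate - sec_103_delta        # implied §103 TC average

print(f"career allow rate:   {career_allow_rate:.1%}")   # 50.1%
print(f"interview lift:      {interview_lift:+.1%}")     # +18.9%
print(f"implied §103 TC avg: {implied_tc_avg_103:.1%}")  # 40.0%
```

The rounding explains the small mismatch between the displayed "50%" career rate and the "+18.9%" lift against the 69% with-interview figure.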

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see pg. 9, filed 12/10/2025, with respect to the status of the claims are hereby acknowledged. Applicant’s arguments, see pg. 9-10, filed 12/10/2025, with respect to the rejection of claims 21-40 under 35 U.S.C. 103 have been fully considered. The examiner notes that the applicant’s arguments are directed to newly amended limitations not previously recited; therefore, upon further consideration, a new ground(s) of rejection is made with newly found prior art in order to address the newly amended limitations. With respect to the amended claim limitations recited in the independent claims, the examiner will rely in part on the prior art of record and on newly found prior art.

For example, the claims are amended to recite “wherein the artificial intelligence algorithm includes an artificial neural network algorithm configured to take at least a portion of the determined demographic information for the audience member, including an age of the audience member, and identify media content that will appeal to the audience member using the age of the audience member.” Fan teaches a UE user employing a UE (e.g., 202, 204, 206), and associated available LPPMC 214, LSPPMC 216, or LPC 218, can specify desired user preferences, selections, or requests (e.g., certain programs or exhibits the UE user desires to see during the tour; types of subject matter in which the UE user is interested) (Fan para 0128-129: time constraints and a tour itinerary guide the user to different exhibits at a particular venue). See also the analogous teachings of Krasadakis 3C para 73-75, disclosing identifying the location of a viewer using positioning information or detection information.
In particular, Fan teaches:

[0128] In accordance with yet another aspect, the PMC 212 can customize the information presentation (e.g., tour presentation) that the UE user experiences using the UE (e.g., 202, 204, 206), based at least in part on current location of the UE user and associated UE, preferences specified by the UE user, time constraints of the UE user (e.g., the amount of time the UE user desires to spend taking part in the information presentation), current state of the facility, programs, or exhibits (e.g., number of persons occupying respective areas of the facility, respectively how busy respective areas of the facility are, what exhibits or programs are currently available for access and what programs are not currently available for access, what exhibits or programs will be available for access during the period of time specified by the UE user), and/or other desired factors.

Fan further teaches that content provided to the user is based on the user preferences comprising historical/history data (Fan para 143, 154-156). Fan para 170-171 and para 216-271 teach utilizing artificial intelligence and neural networks:

[0216] In accordance with another embodiment of the disclosed subject matter, one or more components (e.g., UE, PMC, LPPMC, LSPMC, LPC, etc.) in the communication network environment can utilize artificial intelligence (AI) techniques or methods to infer (e.g., reason and draw a conclusion based at least in part on a set of metrics, arguments, or known outcomes in controlled scenarios) an automated response to perform in response to an inference(s); a current or future state of conditions relating to an exhibit, program, lecture, area (e.g., room) of a facility, etc., associated with an information presentation; a language translation of a word or phrase; a recognition of a spoken word or phrase during a voice-to-text translation of the word or phrase; identifying a current location or an expected future location of a UE; a recommendation or a targeted advertisement that is or may be of interest to a UE user; a customized tour itinerary; a customized tour route; etc. Artificial intelligence techniques typically can apply advanced mathematical algorithms--e.g., decision trees, neural networks, regression analysis, principal component analysis (PCA) for feature and pattern extraction, cluster analysis, genetic algorithm, and reinforced learning--to historic and/or current data associated with the systems and methods disclosed herein to facilitate rendering an inference(s) related to the systems and methods disclosed herein.

Whereas Fan does not use the term “demographics,” as discussed above, Fan does teach that a person’s preferences are utilized to determine what content will be presented to the viewer. See also the teachings of Krasadakis above, disclosing determining the demographics of a person in conjunction with making content recommendations for presentation (para 20-21).
In an analogous art, Horvitz teaches utilizing demographics of a person in order to make content recommendations, wherein the database can be mined for other similar users who visited that location and interacted in a certain way, such that demographics can be employed to facilitate what data will be presented and also what new data will be presented (see Horvitz para 34-40, 77). In an analogous art, Tesch teaches utilizing artificial intelligence comprising neural networks in order to generate content based on a set of user-defined preferences/classifications (para 288) and further teaches that the user-defined preferences/classifications comprise the age of the viewer (para 81, 93, 114, 119, 129, 139, 311). Furthermore, newly found prior art Sadowsky discloses a motivation for utilizing artificial intelligence for presenting audio-visual content to an audience, comprising utilizing said artificial intelligence to detect a presence of an audience member using facial recognition, identify the audience member, and determine demographic information for the audience member, including the age of the audience member, in order to identify and modify the presentation of media content with content that will appeal to the audience member (Sadowsky para 109-119, 124-127, 133-137). All things considered, the examiner will set forth a new obviousness ground of rejection.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Krasadakis; Georgios et al. US 20170289596 A1 (hereafter Krasadakis), in view of Shimy; Camron et al. US 20110072452 A1 (hereafter Shimy), in further view of Fan; James et al. US 20120102409 A1 (hereafter Fan), in further view of Horvitz; Eric J. et al. US 20070005419 A1 (hereafter Horvitz), in further view of Tesch; Mans Anders et al.
US 20140164507 A1 (hereafter Tesch) and in further view of Sadowsky; Richard Scott et al. US 20170095192 A1 (hereafter Sadowsky). Regarding claim 21, “an audio-visual display system for presenting audio-visual media content to an audience, the system comprising: a recognition module configured to detect a presence of an audience member in a venue and identify the audience member; a media content playback device operable to present audio-visual media content to the audience member; a content control module configured to control presentation of the audio-visual media content by the media content playback device, wherein the content control module is configured to execute an artificial intelligence algorithm; a processor in communication with the recognition module, the media content playback device, and the content control module; and a memory in communication with the processor and having stored therein instructions that, when read by the processor, direct the processor to: receive the identity of the audience member from the recognition module; receive demographic information for the audience member” Krasadakis para 14 teaches the displays are controlled by computing devices with cameras for capturing images or videos of people within the viewing areas of the displays. The captured images or video may be interpreted by recognition software to personally identify the users--or at least recognize various features about them. Content is selected for display on the public viewing screens based on the users currently within the viewing areas. For example, a screen in an airport may be equipped with a camera that captures facial images of users walking by. These facial images may be transmitted to a backend server that may select particular content--e.g., news stories--to present on the display based on the identified users. 
Krasadakis uses image capture devices to identify a viewer in proximity to a media device to obtain the viewer profile and deliver tailored content to the media device in proximity to the viewer based on a time threshold that the viewer will spend at a particular location (i.e., the amount of time the viewer will spend walking past a display at a public venue) (see Krasadakis para 14-17, 20-24). In particular, Krasadakis para 20-21, 64, 83 teaches receiving demographic information of the identified persons, wherein the person is individually identified through facial recognition, body recognition, device communication (e.g., wireless communication with the user's smart phone or wearable device), speech recognition, user profile identifier, social media handle, user name, speech, biometrics (e.g., eye pattern, fingerprints, etc.), or any other technique for personally identifying the user. For example, a camera coupled to a display device may capture video of a person, and that video may be analyzed—either locally or via a web service—to recognize the person. Krasadakis para 55, 58 teaches that display device 100 may capture interaction with passing-by users 102 through multiple channels, utilizing a plurality of media devices to promote a particular targeted exhibit. For example, smart phones, tablets, or wearable devices of the user 102 may submit signals via a pre-installed application. The microphone 114, camera 116, and sensors 118 may capture information about the user 102 or the environment.

See Krasadakis Fig. 1-2 and para 62-65, comprising elements 100 and 204 (a display device, I/O components, and memory communicatively coupled with element 104, a processor) and elements 118, 116, 120, 126, 128, 214, 216 (which respectively correspond to sensors, camera, presentation device, user recognizer, content retriever, user identifier, and content selector), which correspond to the claimed “a processor in communication with the recognition module, the media content playback device, and the content control module; and a memory in communication with the processor and having stored therein instructions that, when read by the processor, direct the processor to.” Regarding the claimed “modules,” see Krasadakis [0090], a camera for capturing an image of the user while in the area; [0091], executing a facial recognition module to recognize a face of the at least one user; [0092], recognizing two or more users in the area, wherein the processor is programmed to select the content for presentation based on the common characteristics of the two or more users in the area; see also para 116, disclosing that aspects of the disclosure may be implemented with any number and organization of such components or modules and are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein.
Whereas Krasadakis teaches receiving the identity of the audience member from the recognition module and receiving demographic information for the audience member, Krasadakis does not explicitly use the term artificial intelligence algorithm (i.e., wherein the demographic information is determined by the artificial intelligence algorithm). However, Krasadakis para 20-21 does disclose capturing video of a person, wherein that video may be analyzed—either locally or via a web service—to recognize the demographics of a person, and para 33 teaches that the invention incorporates a neural network, understood as utilizing artificial intelligence, for communicating transfer data between servers or display devices.
First, with respect to “determine, by the artificial intelligence algorithm executed by the content control module, media content preferences for the audience member, wherein the media content preferences for the audience member include a preference for a scientific aspect of an exhibit in the venue or a historical aspect of the exhibit in the venue, wherein the artificial intelligence algorithm includes an artificial neural network algorithm configured to take at least a portion of the determined demographic information for the audience member, including an age of the audience member, and identify media content that will appeal to the audience member using the age of the audience member; determine, by the artificial intelligence algorithm executed by the content control module, media content for the audience member, wherein determining the media content for the audience member includes considering the determined media content preferences for the audience member, and the demographic information determined by the artificial intelligence algorithm for the audience member; and operate the media content playback device to present the media content for the audience member to the audience member, wherein the presented media content includes audio-visual media content”: as discussed above, Krasadakis does not explicitly use the term artificial intelligence algorithm (i.e., in haec verba); however, Krasadakis [0066] teaches that content selector 216 selects media content from database 210 that is likely to be engaged based on the user profile data of the user 102 or the history of other user interactions with characteristics in common with the user 102.
Furthermore, as discussed above, whereas Krasadakis teaches a user recognizer 126, content retriever 128, user identifier 214, and content selector 216 utilizing facial recognition to identify a user and obtain the user profile from the identifying information, Krasadakis does not teach that the content control module performs both functions. Stated differently, Krasadakis teaches identifying the viewer and then obtaining the user profile, wherein the content selector then utilizes the obtained audience member information for each of the audience members for whom identifier information has been received. Furthermore, whereas Krasadakis in essence teaches all the limitations except the term “artificial intelligence” as discussed above (i.e., in haec verba), Krasadakis still teaches essentially all the limitations: per para 0063-0065, the user is detected at display device 100a and identifying information about the user and/or the client device 206 is transmitted to the application server 204. The user identifier 214 may query database cluster 208 with any or all of the received identifying information about the user 102 to obtain a user profile and history of content that has been presented to the user. Alternatively or additionally, the user identifier 214 may access an in-memory or accessible cache to retrieve the user profile and history of content that has been presented. See Krasadakis para 52, 56, 64, 72, obtaining user preferences. See also, as discussed above, Krasadakis para 14, teaching that the displays are controlled by computing devices with cameras for capturing images or videos of people within the viewing areas of the displays. The captured images or video may be interpreted by recognition software to personally identify the users—or at least recognize various features about them. Content is selected for display on the public viewing screens based on the users currently within the viewing areas.
For example, a screen in an airport may be equipped with a camera that captures facial images of users walking by. These facial images may be transmitted to a backend server that may select particular content—e.g., news stories—to present on the display based on the identified users. Krasadakis uses image capture devices to identify a viewer in proximity to a media device (i.e., exhibit) to obtain the viewer profile and deliver tailored content to the media device in proximity to the viewer based on a time threshold that the viewer will spend at a particular location (i.e., the amount of time the viewer will spend walking past a display at a public venue) (see Krasadakis para 14-17, 20-24). See also para 20-21, 24, 42-43, disclosing that the display devices are equipped with cameras or other sensors that identify the demographics of people within a first public viewing area (e.g., a particular portion of an airport) of a first public display device at an initial time. A person may be “personally” recognized, meaning the person is individually identified, through facial recognition or a user profile identifier. For example, a camera coupled to a display device may capture video of a person, and that video may be analyzed—either locally or via a web service—to recognize the person. Per para [0042-0043], the user recognizer 126 may alternatively or additionally provide the captured image, video, audio, or sensor data over the network 106 to a web service that may then identify the users 102 through comparison against a database of subscribed users—either personally based on stored characteristics and user profiles or partially based on image, video, audio, or sensor data analysis. See also para 44-45, 48, recognizing a group of users, wherein the content retriever 128 includes instructions for retrieving presentable content for the recognized user 102.
Content may include any audio, video, image, web content, actionable insignia, or other data that can be visibly, audibly, or electronically provided to the user 102 and users 102 may be recognized by the user recognizer 126 either individually or as part of a group of people within the viewing area. Because the public devices 100 are located in public areas, it is often the case that numerous users 102 are within the viewing areas at any given time. Examples may select content—either on the display device 100 or by a web service—to present to a group of users based on the user profile of one of the users 102 in the group, a collection of users 102, or all the users 102 (corresponds to all the limitations, except the explicit recitation of the term “artificial intelligence” with respect to “determine, by the artificial intelligence algorithm executed by the content control module, media content preferences for the audience member, wherein the media content preferences for the audience member include a preference for a scientific aspect of an exhibit in the venue or a historical aspect of the exhibit in the venue, wherein the artificial intelligence algorithm includes an artificial neural network algorithm configured to take at least a portion of the determined demographic information for the audience member, including an age of the audience member, and identify media content that will appeal to the audience member using the age of the audience member; determine, by the artificial intelligence algorithm executed by the content control module, media content for the audience member, wherein determining the media content for the audience member includes considering the determined media content preferences for the audience member, and the demographic information determined by the artificial intelligence algorithm for the audience member; and operate the media content playback device to present the media content for the audience member to the audience member, wherein the 
presented media content includes audio-visual media content”).

In an analogous art, Shimy teaches: para 49-54, a processor and memory of a system for detecting and/or identifying a user or users of a media device, comprising image capture devices corresponding to audience identifiers corresponding to identified viewers and associated profiles; para 37, 71-72, a media presentation device that stores media content locally and is capable of obtaining content from remote source 416; para 45, all or part of user profiles are obtained from remote sources; para 91, recommending content based on detected users; and para 43-45, a user’s preferences are associated with user identification. See also Shimy para 50-54, users are identified based, in part, on identification of the media device and associated identifiers to use user profiles and preferences; para 54-56, 82, an identified user is associated with a user profile stored in memory of a set-top box in order to present tailored content; and para 56, tuning and decoding circuitry are separate from the detecting circuitry 307, such that an image capture device for a detecting module is separate from a module for accessing user profile information to select content for a tailored presentation. Shimy further teaches that presented media content is based on a combination of user settings (Shimy para 0005, 0082, 119). Shimy further teaches a personal device as a mobile device recognized within the proximity of a set-top box, wherein the user associated with the mobile device is then identified, from local or remote storage, as being the current audience information to select displayable content (para 43, 51-54, 58-59, 104).
Shimy’s teachings would be understood by one of ordinary skill in the art to disclose the claimed limitation regarding “receiving an image” without reciting the term “image.” For example, Shimy discloses that a camera detects visual information, which a person of ordinary skill in the art would interpret as comprising images, because Shimy states in para 52 that the camera may be capable of capturing information within the visual spectrum and/or outside the visual spectrum. Shimy teaches that, based on the viewers detected in proximity to a television, the invention uses viewer profiles in order to determine the information that is displayed on the television. For example, Shimy discloses in para 124-125 that if multiple users are active at a device, options include providing the content associated with a particular device (e.g., the user's mobile device) and/or any other suitable device. Regarding “wherein the artificial intelligence algorithm includes an artificial neural network algorithm configured to take at least a portion of the determined demographic information for the audience member, including an age of the audience member, and identify media content that will appeal to the audience member using the age of the audience member; determine, by the artificial intelligence algorithm executed by the content control module, media content for the audience member, wherein determining the media content for the audience member includes considering the determined media content preferences for the audience member, and the demographic information determined by the artificial intelligence algorithm for the audience member; and operate the media content playback device to present the media content for the audience member to the audience member”: Shimy para 130 teaches only presenting content that will appeal to a viewer of a particular age by changing the displayed movie.
A person of ordinary skill in the art would have appreciated an embodiment wherein a content control module is able to provide the audience member information for each of the audience members for whom identifier information has been received, in order to provide tailored content for viewers detected in the vicinity of a content presentation device. Whereas Shimy and Krasadakis both utilize user/viewer preferences for determining appropriate content for presentation, Shimy and Krasadakis do not use the terms “a preference for a scientific aspect” or a “historical aspect.” Shimy, however, does teach in paragraph 0126 that “in some embodiments, users may set manually or media devices may determine automatically preferences associated with particular actors and/or actresses, genres, program types, and the current mood of the user or users, or any other suitable preference and/or aspect of the users' profiles. Various systems and methods for determining users' preferences for media content are discussed in, for example, Yates, U.S. patent application Ser. No. 11/324,202, filed Dec. 29, 2005, which is hereby incorporated by reference herein in its entirety.” Fan teaches a UE user employing a UE (e.g., 202, 204, 206), and associated available LPPMC 214, LSPMC 216, or LPC 218, can specify desired user preferences, selections, or requests (e.g., certain programs or exhibits the UE user desires to see during the tour; types of subject matter in which the UE user is interested) (Fan para 0128-129: time constraints and a tour itinerary guide the user to different exhibits at a particular venue). See also the analogous teachings of Krasadakis 3C para 73-75, disclosing identifying the location of a viewer using positioning information or detection information.
In particular, Fan teaches:

[0128] In accordance with yet another aspect, the PMC 212 can customize the information presentation (e.g., tour presentation) that the UE user experiences using the UE (e.g., 202, 204, 206), based at least in part on current location of the UE user and associated UE, preferences specified by the UE user, time constraints of the UE user (e.g., the amount of time the UE user desires to spend taking part in the information presentation), current state of the facility, programs, or exhibits (e.g., number of persons occupying respective areas of the facility, respectively how busy respective areas of the facility are, what exhibits or programs are currently available for access and what programs are not currently available for access, what exhibits or programs will be available for access during the period of time specified by the UE user), and/or other desired factors.

Fan further teaches that content provided to the user is based on the user preferences comprising historical/history data (Fan para 143, 154-156). Fan para 170-171 and para 216-271 teach utilizing artificial intelligence and neural networks:

[0216] In accordance with another embodiment of the disclosed subject matter, one or more components (e.g., UE, PMC, LPPMC, LSPMC, LPC, etc.) in the communication network environment can utilize artificial intelligence (AI) techniques or methods to infer (e.g., reason and draw a conclusion based at least in part on a set of metrics, arguments, or known outcomes in controlled scenarios) an automated response to perform in response to an inference(s); a current or future state of conditions relating to an exhibit, program, lecture, area (e.g., room) of a facility, etc., associated with an information presentation; a language translation of a word or phrase; a recognition of a spoken word or phrase during a voice-to-text translation of the word or phrase; identifying a current location or an expected future location of a UE; a recommendation or a targeted advertisement that is or may be of interest to a UE user; a customized tour itinerary; a customized tour route; etc. Artificial intelligence techniques typically can apply advanced mathematical algorithms--e.g., decision trees, neural networks, regression analysis, principal component analysis (PCA) for feature and pattern extraction, cluster analysis, genetic algorithm, and reinforced learning--to historic and/or current data associated with the systems and methods disclosed herein to facilitate rendering an inference(s) related to the systems and methods disclosed herein.

Whereas Fan does not use the term “demographics,” as discussed above, Fan does teach that a person’s preferences are utilized to determine what content will be presented to the viewer. See also the teachings of Krasadakis above, disclosing determining the demographics of a person in conjunction with making content recommendations for presentation (para 20-21).
In an analogous art, Horvitz teaches utilizing demographics of a person in order to make content recommendations, wherein the database can be mined for other similar users who visited that location and interacted in a certain way, such that demographics can be employed to facilitate what data will be presented and also what new data will be presented (see Horvitz para 34-40, 77). In an analogous art, Tesch teaches utilizing artificial intelligence comprising neural networks in order to generate content based on a set of user-defined preferences/classifications (para 288) and further teaches that the user-defined preferences/classifications comprise the age of the viewer (para 81, 93, 114, 119, 129, 139, 311). See prior art made of record but not relied upon, to avoid duplicative references: Shoemake; Matthew B. et al. US 20150070516 A1, disclosing providing media content targeted to an identified age of an audience member (para 20, 28, 31, 87-88, 92, 158-161). In an analogous art, Sadowsky discloses a motivation for utilizing artificial intelligence for presenting audio-visual content to an audience, comprising utilizing said artificial intelligence to detect a presence of an audience member using facial recognition, identify the audience member, and determine demographic information for the audience member, including the age of the audience member, in order to identify and modify the presentation of media content with content that will appeal to the audience member (Sadowsky para 109-119, 124-127, 133-137).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krasadakis’ invention for using image capture devices to identify a viewer in proximity to a media device (i.e., exhibit) to obtain the viewer profile and deliver tailored content to the media device in proximity to the viewer based on a time threshold that the viewer will spend at a particular location (i.e., the amount of time the viewer will spend walking past a display at a public venue): by further incorporating known elements of Shimy’s invention for detecting viewers in proximity to a media device and tracking user movement to determine a dwell-time requirement to spend at a location, which determines whether tailored content will be presented at the media device in close proximity to the viewer, in order to present tailored content at a particular venue based on the navigation of a moving viewer; and by further incorporating known elements of Fan, which recognizes the benefit of utilizing artificial intelligence for identifying a viewer in proximity to an exhibit in order to present adaptive playback of media content in a guided-tour scenario at each particular exhibit location of the viewer’s travel path and to tailor the content based on user preferences. This is because the prior art to Horvitz recognizes the benefit of utilizing artificial intelligence for analyzing demographics when recommending exhibits at a museum and accompanying content; the prior art to Tesch teaches utilizing artificial intelligence comprising neural networks in order to generate content based on a set of user-defined preferences/classifications, wherein the user-defined preferences/classifications comprise the age of the viewer; and the prior art to Sadowsky recognizes the benefits of utilizing artificial intelligence for presenting audio-visual content to an audience, comprising utilizing said artificial intelligence to detect a presence of an audience member using facial recognition, identify the audience member, and determine demographic information for the audience member, including the age of the audience member, in order to identify and modify the presentation of media content with content that will appeal to the audience member.
Regarding claim 22, “wherein the stored computer instructions are further configured to instruct the processor to determine a time allocation constraint for the audience member” is further rejected on obviousness grounds as discussed in the rejection of claim 21, wherein Shimy and Krasadakis disclose identifying audience members within proximity to exhibit control devices having media content storage in order to present tailored content at a particular location based on a time allocation requirement to spend at that location. The prior art to Fan teaches that the user can specify time constraints (e.g., the amount of time the user desires to spend taking part in the information presentation relating to an exhibit at a particular venue such as a facility) (Fan para 0128-129: time constraints and a tour itinerary guide the user to different exhibits at a particular venue).

Regarding claim 23, “wherein the stored computer instructions are further configured to instruct the processor to determine the media content for the audience member based on the determined time allocation constraint for the audience member” is further rejected on obviousness grounds as discussed in the rejection of claims 21-22, wherein Shimy and Krasadakis disclose identifying audience members within proximity to exhibit control devices having media content storage in order to present tailored content at a particular location based on a time allocation requirement to spend at that location. The prior art to Fan teaches that the user can specify time constraints (e.g., the amount of time the user desires to spend taking part in the information presentation relating to an exhibit at a particular venue such as a facility) (Fan para 0128-129: time constraints and a tour itinerary guide the user to different exhibits at a particular venue).
Regarding claim 24, “further including a proximity sensor configured to detect a location of the audience member in the venue” is further rejected on obviousness grounds as discussed in the rejection of claims 21-23. Krasadakis para 20 teaches devices equipped with cameras or other sensors that identify people within a first public viewing area (e.g., a particular portion of an airport) of a first public display device at an initial time. See also Shimy, which further teaches a personal device, such as a mobile device, recognized within the proximity of a set-top box, after which the user associated with the mobile device is identified, from local or remote storage, as the current audience information used to select displayable content (para 43, 51-54, 58-59, 104). Shimy teaches that, based on the viewers detected in proximity to a television, the invention uses viewer profiles to determine the information displayed on the television. See also Fan para 132: the display devices are equipped with cameras or other sensors that identify people within a first public viewing area (e.g., a particular portion of an airport) of a first public display device at an initial time.

Regarding claim 25, “wherein the stored computer instructions are further configured to instruct the processor to determine physical attribute information for the audience member, and wherein media content for the audience member is determined at least partially based on the physical attribute information for the audience member” is further rejected on obviousness grounds as discussed in the rejection of claims 21-24. Krasadakis para 21, 48 teaches that captured video of a user may identify the person's gender, race, height, and build, which can be used to select content to present while the person is in the viewing area of the display device.
See also Shimy para 50, 54, 86, 103, detecting users within proximity based on physical characteristics comprising facial characteristics or shape.

Regarding claim 26, “wherein the stored computer instructions are further configured to instruct the processor to determine personal information for the audience member, and wherein media content for the audience member is determined at least partially based on the personal information for the audience member” is further rejected on obviousness grounds as discussed in the rejection of claims 21-25. Krasadakis para 63-64 further teaches that the user profile includes the user's personal information; see also Shimy para 119.

Regarding claim 27, “wherein the stored computer instructions are further configured to instruct the processor to: select playback parameters for use during playback of the media content to the audience member” is further rejected on obviousness grounds as discussed in the rejection of claims 21-26. Shimy teaches (para 130-133) determining that particular content cannot be provided based on media ratings, with the parameters set to display obscured content; Fig. 10-13 show media content listings corresponding to an identifier of determined media content; para 125 discusses a warning of playback parameters. As such, Shimy teaches that each particular media content item has associated playback parameters. See also Krasadakis para 28: displayed content comprises presentation parameters.

Regarding claim 28, “wherein the playback parameters include at least one of volume, foreign language, closed captioning, or specialized content for a person with a disability” is further rejected on obviousness grounds as discussed in the rejection of claims 21-27. Shimy teaches (para 130-133) determining that particular content cannot be provided based on media ratings, with the parameters set to display obscured content; Fig. 10-13 show media content listings corresponding to an identifier of determined media content; and para 125's warning of playback parameters includes an interface configured to provide a prompt. A person of ordinary skill in the art would understand that media ratings indicate whether the content comprises adult language (see Shimy para 162, discussing profane/adult language).

Regarding claim 29, “wherein the media content playback device is a smartphone, computer, or personal digital device” is further rejected on obviousness grounds as discussed in the rejection of claims 21-28. Krasadakis para 29-31 discloses that the display device may take the form of a mobile computing device or any other portable device, such as, for example and without limitation, a computer monitor, an electronic billboard, a projector, a television, a see-through display, a virtual reality (VR) device or projector, a computer, a kiosk, a tabletop device, a wireless charging station, or an electric automobile charging station. Furthermore, the display device 100 may alternatively take the form of an electronic component of a public train, airplane, or bus (e.g., a vehicle computer equipped with cameras or other sensors disclosed herein).

Regarding claim 30, “wherein the recognition module is configured to wirelessly identify the audience member” is further rejected on obviousness grounds as discussed in the rejection of claims 21-29. Krasadakis para 20 teaches that a person may be “personally” recognized, meaning the person is individually identified, through facial recognition, body recognition, device communication (e.g., wireless communication with the user's smart phone or wearable device), speech recognition, user profile identifier, social media handle, user name, speech, biometrics (e.g., eye pattern, fingerprints, etc.), or any other technique for personally identifying the user.
For example, a camera coupled to a display device may capture video of a person, and that video may be analyzed, either locally or via a web service, to recognize the person. See also Shimy para [0051]: detecting circuitry 307 may also be capable of detecting and/or identifying a user or users based on recognition and/or identification of a media device (e.g., a mobile device, such as an RFID device or mobile phone) that may be associated with the user or users. Detecting circuitry 307 may recognize and identify such a device using any suitable means, for example, radio-frequency identification, Bluetooth, Wi-Fi, WiMax, internet protocol, infrared signals, any other suitable IEEE, industrial, or proprietary communication standards, or any other suitable electronic, optical, or auditory communication means.

Regarding system claims 31-40, the claims are grouped and rejected with claims 21-30 because the elements of the system are met by the disclosure of the apparatus and methods of the reference(s) as discussed above, and because the elements of the system are easily converted into elements/steps of an apparatus/method by one skilled in the art.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFONSO CASTRO, whose telephone number is (571) 270-3950. The examiner can normally be reached Monday to Friday from 10am to 6pm. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nathan Flynn, can be reached. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALFONSO CASTRO/
Primary Examiner, Art Unit 2421

Prosecution Timeline

Dec 16, 2024
Application Filed
Mar 08, 2025
Non-Final Rejection — §103
Mar 28, 2025
Response Filed
Apr 19, 2025
Final Rejection — §103
Jul 24, 2025
Response after Non-Final Action
Aug 12, 2025
Request for Continued Examination
Aug 16, 2025
Response after Non-Final Action
Sep 06, 2025
Non-Final Rejection — §103
Dec 10, 2025
Response Filed
Apr 01, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12563253
METHOD OF BROADCASTING REAL-TIME ON-LINE COMPETITIONS AND APPARATUS THEREFOR
2y 5m to grant Granted Feb 24, 2026
Patent 12563240
IN-TESTING QUALITY OF EXPERIENCE OF CONNECTIVITY BETWEEN AIRCRAFT AND GROUND CONTENT SERVERS
2y 5m to grant Granted Feb 24, 2026
Patent 12532036
MULTI-CAMERA MULTIVIEW IMAGING WITH FAST AND ACCURATE SYNCHRONIZATION
2y 5m to grant Granted Jan 20, 2026
Patent 12464194
SHARING CONTENT ITEM COLLECTIONS IN A CHAT
2y 5m to grant Granted Nov 04, 2025
Patent 12439124
METHODS, SYSTEMS, AND MEDIA FOR PRESENTING RECOMMENDED MEDIA CONTENT ITEMS BASED ON USER NAVIGATION SIGNAL
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
50%
Grant Probability
69%
With Interview (+18.9%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 435 resolved cases by this examiner. Grant probability derived from career allow rate.
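The footnote above says the grant probability is derived from the career allow rate and that an interview adds 18.9 points. A minimal sketch of that arithmetic, assuming the lift is applied as additive percentage points (the dashboard does not document its exact formula, so this is an illustration, not the tool's implementation):

```python
# Reproduce the projection figures from the examiner's career statistics.
# The additive-lift assumption is ours; the tool may weight it differently.

def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

base = grant_probability(218, 435)   # 218 granted / 435 resolved
interview_lift = 0.189               # +18.9 percentage points with interview
with_interview = base + interview_lift

print(f"Base grant probability: {base:.0%}")            # 50%
print(f"With interview:         {with_interview:.0%}")  # 69%
```

This matches the 50% and 69% figures shown in the projections above.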
