DETAILED ACTION
The present application is being examined under the pre-AIA first to invent provisions.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/29/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-15 and 17-21 have been considered but are moot in view of the new ground of rejection.
With respect to Applicant’s argument that no new matter has been added (Applicant’s remark on page 5), Examiner respectfully disagrees. The originally-filed specification does not provide support for the newly added limitation “the processor generates … by replacing a first face present in the message with a face of the viewer present in the input data” as recited in amended claim 1.
With respect to the double patenting rejection, Applicant indicates that Applicant is prepared to enter a Terminal Disclaimer in the present application … (page 7). However, a terminal disclaimer has not been filed. Therefore, the double patenting rejection is maintained.
Applicant further argues that Archibong does not disclose replacing a first face present in the message (e.g., an advertisement or a TV show, as asserted by the Office Action) with a face of the viewer present in the input data. Applicant asserts that Archibong makes it clear that the social area 1140 placed on the viewing screen is not to be placed over "important area[s]" (see, e.g., Archibong at paragraphs 0131-0137). Further, Applicant asserts that Archibong clearly and consistently identifies faces as an important area, for example, at paragraph 0134 ("determine important areas of incoming video frame 1120 by determining whether any faces are shown on the display"), paragraph 0136 ("avoid overlapping the singer's face"), paragraph 0144 ("the overlay area may be placed to [a]void any determined important areas of the first video stream such as faces or text"), paragraph 0221 ("chat area 2310 may be located to avoid overlapping faces"), and claim 8 ("wherein the one or more important areas comprise... an area comprising one or more faces"). Accordingly, Applicant argues that Archibong does not disclose each element of claim 1 as amended, and is not compatible with an embodiment that explicitly replaces one of the faces in the underlying advertisement or TV show (page 6).
In response, it is noted that the amended claims do not recite that the face of the viewer does not block or overlay the singer’s face or an important area. Instead, the amended claims recite “replacing a first face present in the message with a face of the viewer present in the input data”. This limitation is interpreted as the concept of replacing a face (which could be a real face, avatar, or a photo/image/symbol that represents a “face”) in a message/comment with a face (which could be an avatar, symbol, image, photo, or real face) of a user/viewer present/included/provided in the input data of the user/viewer (see, for example, the concept in Shimy (US 20110069940): figures 9-11, which disclose replacing the face of a user that leaves the area/is no longer active with the face of a new active user; or see Swaminathan et al. (US 20140168056): for example, figure 10, which discloses replacing a face in a message/screen with the face of a user/participant in the input data; or see the teaching in US 8839306 (figure 6, col. 8, lines 10-55) and US 20120159527 (see, for example, figures 11A-11C; paragraphs 0038-0041, 0083, 0094)).
Although Examiner does not agree with Applicant’s remarks that Archibong does not disclose the newly added feature, to provide clear support that the newly added limitation of replacing a first face present in the message with a face of the viewer present in the input data is well-known in the art, the rejection relies on newly cited references for this teaching as discussed below.
It is noted that non-functional descriptive material does not patentably distinguish over prior art that otherwise renders the claims unpatentable. See, for example, MPEP 2111.05 and MPEP 2112.01(III). See also In re Ngai, 367 F.3d 1336, 1339 (Fed. Cir. 2004); Ex parte Nehls, 88 USPQ2d 1883, 1887-90 (BPAI 2008) (precedential) (discussing cases pertaining to non-functional descriptive material); see also the BPAI’s decision in Appeal 2009-010851 (for Ser. No. 10/622,876) and the BPAI’s decision in Appeal 2011-011929 (for Ser. No. 11/709,170), pages 6-7. In this case, the particular types of information such as “a first face in the message” and “a face of the viewer present in the input data” could be considered non-functional descriptive material and are not required to be given patentable weight because these particular types of data do not functionally change the structure or operation of a system that discloses replacing one particular type of data with another type of data. The limitations “a first face in the message” and “a face of the viewer present in the input data” are given patentable weight only as two types of data.
Although non-functional descriptive material is not required to be considered, all claim limitations, including the non-functional descriptive material, are taught by the prior art as discussed below.
For the reasons given above, the rejections of claims 1-15 and 17-21 are set forth below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-15, 17-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Amended independent claim 1 recites limitation “replacing a first face present in the message with a face of the viewer present in the input data” which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention (see “response to arguments” above).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
Claims 1-4, 7-15, 17-20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Archibong et al. (US 20140068692) in view of either Swaminathan et al. (US 20110168056) or Shimy et al. (US 20110069940) or Perez et al. (US 20120159527).
Note: all documents that are directly or indirectly incorporated by reference in Archibong (see, for example, paragraphs 0086, 0235), Shimy, or Perez (e.g., paragraphs 0038, 0041) are treated as part of the specification of the respective reference (see MPEP § 2163.07(b)).
Regarding claim 1, Archibong discloses a multi-media presentation system to engage a viewer, comprising:
a storage subsystem that stores at least one archetype, each archetype comprising configuration data related to features of a message related to an item or service for purchase (the subsystem comprises user profile storage, content object classification, action logger, etc., that store configuration data related to preferences, interest level, shared information, recommended information, etc., related to features of an item, a program, a product, an advertising message, a title, or a service for purchase – see include, but are not limited to, figures 2, 8, 39, paragraphs 0052-0056, 0059, 0071, 0073-0076, 0112-0113, 0118, 0123, 0155, 0161, 0186-0187, and discussion in “response to arguments” above);
an input subsystem that collects data related to a viewer when the viewer is within a predetermined proximity of the system (see include, but are not limited to, discussion in “response to arguments” above and paragraphs 0119, 0149-0152, 0160-0161, 0167, 0171-0174);
a processor in communication with the input subsystem and the storage subsystem, wherein the processor analyzes the collected input data to project which of the input data is likely to be of interest to the viewer, and wherein the processor further selects the message from the storage subsystem likely to be of interest to the viewer based on the input data, and wherein the processor generates a customized multi-media presentation that incorporates the projected input data into the message by replacing data/an object present in the message with data of the user present in the input data (a processor and/or CPU in communication with the input subsystem and the storage subsystem; the CPU/processor analyzes the collected input data of posts, selections, shared information, reactions, behavior, etc., of user(s) to project/predict/recommend/suggest which of the input data of posts, shares, selections, behavior, gestures, recommendations, etc., is likely to be of interest to the viewer; the processor/CPU selects social content comprising an advertisement/product/commercial message or program title from the storage subsystem likely to be of interest to the viewer based on user input, gazing, or selection, to recommend/suggest to the viewer/user, such as an advertisement/program that was previously selected by the user or recommended/selected/shared by friends; and the processor/CPU generates a recommended/customized multi-media presentation that incorporates/combines the projected input data, such as posted content, shared data, chat data, and input data of the user, friends, chat, etc., into social data comprising the advertising message, commercial, product, title of a program another user watched, etc., by replacing the data/symbol in the message/comments containing the advertisement/social data with the data of the user/viewer present in posts, chat, captured images, shared data, etc., related to the user in real time as recommended/customized data to the user – see discussion in the “response to arguments” above and, include, but are not limited to, figures 2, 9, 11, 16, 18, 20, 23-26, 35, 39, paragraphs 0053-0054, 0059, 0074, 0113, 0115, 0160-0161, 0165, 0176-0177, 0187, 0221, 0225, 0316); and
a display subsystem in communication with the processor that displays the customized multi-media presentation to engage the viewer (a display subsystem, such as a display of a TV and/or mobile device, in communication with the processor of the social content system that displays the customized/recommended item, product, advertisement, pay-per-view program, or any service offered for sale (to sell or purchase) – see include, but are not limited to, figures 2, 4, 9, 11, 16, 19, 29 and discussion in “response to arguments” above).
Archibong does not explicitly disclose replacing a first face present in the message with a face of the user present in the input data.
Each of Swaminathan et al. (US 20110168056), Shimy et al. (US 20110069940), and Perez et al. (US 20120159527) discloses replacing a first face present in a message with a face of a user present in input data (see include, but are not limited to, Shimy: figures 9-11, which disclose replacing the face of a user that leaves the area/is no longer active with the face of a new active user; or see Swaminathan: for example, figure 10 and the description corresponding to figure 10, which discloses replacing a face in a message/screen with the face of a user/participant in the input data; or see Perez: for example, figures 11A-11C, paragraphs 0038-0041, 0083, 0094, and discussion in “response to arguments” above).
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Archibong with the teaching of replacing a first face with a face of the user present in the input data, as taught by any of Shimy, Swaminathan, or Perez, in order to yield the predictable result of recreating for the viewer an experience of watching the multimedia content live with other users (see Shimy, paragraph 0002; or see Swaminathan, paragraph 0072).
Regarding claim 2, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the input data is collected through a visual scanning device, and at least one of: a range interrupt device, or an audio device (audio/microphone, camera, sensor, range proximity sensor/WiFi, Bluetooth, etc. – see include, but are not limited to, Archibong: paragraphs 0119, 0160, 0166, 0167, 0171-0172).
Regarding claim 3, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the collected input data is analyzed based on age, grade, gender, location, demographic, personal characteristic, group affiliation or relationship, or psychological behavior, and the customized multi-media presentation is generated to attract or repulse the viewer based on the analysis of the collected input data (the collected data is analyzed based on age, demographic, behavior, etc., and content is customized/recommended based on age/child, behavioral collected data, etc. – see include, but are not limited to, Archibong: paragraphs 0053-0054, 0078, 0160-0161, 0170, 0175-0176, 0252).
Regarding claim 4, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the customized multi-media presentation is incorporated into a kiosk, transaction machine, vending machine, retail display, decor, mannequin, creature care system, security system or phone system (e.g., kiosk, security system, phone system with mobile phone device – see include, but are not limited to, Archibong: figures 1, 4, 9, paragraphs 0052, 0342, 0348).
Regarding claim 7, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the input subsystem is operable to communicate with at least one additional multi-media presentation system (communicate with at least one additional multi-media presentation system at another location – see include, but are not limited to, Archibong: figures 1, 3, 4, 33).
Regarding claim 8, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the processor is operable to detect words or objects from the collected input data to integrate into the customized multi-media presentation (detect words using a microphone or capture objects using camera 970 – see include, but are not limited to, Archibong: figure 9, paragraphs 0149, 0160, 0171-0172, 0221).
Regarding claim 9, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the input subsystem comprises a control panel where an administrator, provider, or a viewer can enter or select data or organization (input/panel where a viewer/user can enter or select content/chat/vote – see include, but are not limited to, Archibong: figures 4, 9, 19, 21, 23, 27).
Regarding claim 10, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the collected input data is received from a remote device (remote device or mobile device – see include, but are not limited to, Archibong: figures 4, 9, 19, 21, 23, 27).
Regarding claim 11, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the input subsystem further comprises communication through remote connection to administer, control, adjust or monitor the multi-media presentation system remotely (communication through a remote connection to control, adjust, monitor, or authorize the presentation system remotely from a server and/or other location – see include, but are not limited to, Archibong: figures 1-2, 4, 8-9).
Regarding claim 12, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the processor incorporates a vote, a poll, a choice, or a test answer from the input subsystem into the customized multi-media presentation (see include, but are not limited to, Archibong: figures 21, 27, paragraphs 0136, 0206, 0215).
Regarding claim 13, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the input subsystem is operable to identify a physiological or psychological change in the viewer over time or in comparison to a group (see include, but are not limited to, Archibong: paragraphs 0054-0055, 0059, 0160, 0258).
Regarding claim 14, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, the system incorporated with a device further capable of physical actions performed on the item or service as part of the customized multi-media presentation (physical actions such as selection, engagement performed on the item or service as part of the customized media presentation – Archibong: figures 7, 21-22, paragraphs 0052, 0054, 0160-0161, 0100).
Regarding claim 15, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the input data related to the viewer comprises an image, photo, post, chat, or video of the viewer and an environment about the viewer, such as audio sound, laughter, etc., captured by camera, microphone, etc. (see include, but are not limited to, Archibong: figures 10-12, 23, 29, 35, paragraphs 0115-0116, 0128-0130, 0160-0161, 0171-0174). Thus, the input data of interest to the viewer comprises an image of the viewer and the environment about the viewer so that the image/photo and location/sound of the user captured by camera, microphone, GPS, etc., can be shared with other users in a chat session or node of social content.
See also Deweese (US 20050262542) for providing video image of user and environment of user in video chat as discussed above.
Regarding claim 17, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the collected input data comprises at least one of: an object in the environment touched by the viewer, an object looked at by the viewer, an object moving the fastest, or an object or a pattern of color most distinct from other visual data in the environment of the viewer (an object such as a program, iPhone, faces, etc., looked at, focused on, or used by the viewer, or other visual data such as an iPhone, other users in the room, etc. – see include, but are not limited to, Archibong: paragraphs 0100, 0171-0172, 0176, 0177, 0187).
Regarding claim 18, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the viewer comprises a closest person within a set distance from the display subsystem (a user proximate to, or the closest person within a set distance from, the display subsystem/television – see include, but are not limited to, Archibong: figures 13, 15, paragraphs 0119, 0166-0171, 0278, 0324).
See also Hildreth (20090138805: paragraphs 0010, 0142,0144) for the teaching.
Regarding claim 19, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the collected input data comprises at least one of: a gesture, a body movement, a spoken word, or a sound of the highest decibel level of the viewer (e.g., gesture, body movement, etc. – see include, but are not limited to, Archibong: paragraphs 0160, 0176-0177, 0225).
Regarding claim 20, Archibong in view of either Shimy/Swaminathan/Perez discloses the multi-media presentation system of claim 1, wherein the collected input data comprises at least one of a gesture or a proximity that indicates a relationship between the viewer and another person (a gesture, share, selection, or proximity that indicates a relationship/friendship, or being in a close location, between the viewer and another person/friend – see include, but are not limited to, Archibong: paragraphs 0072, 0116, 0119, 0160, 0165, 0171-0172, 0177).
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
Claims 5-6 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Archibong et al. (US 20140068692) in view of either Shimy/Swaminathan/Perez as applied to claim 1, and further in view of Lindsay (US 20110178812).
Regarding claim 5, Archibong discloses the multi-media presentation system of claim 1. Archibong does not explicitly disclose the system incorporated with a vending machine, the vending machine further comprising a dispenser operable to dispense an item or service with the customized multi-media presentation.
Lindsay discloses a system incorporated with a vending machine (e.g., vending machine 195), the vending machine further comprising a dispenser operable to dispense an item or service with the customized multi-media presentation (see include, but are not limited to, Lindsay: figures 1-4, para. 0026).
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to further modify the output device of Archibong to comprise a vending machine with a dispenser as taught by Lindsay in order to dispense a product based on information received from the user (see para. 0026).
Regarding claim 6, Archibong discloses the multi-media presentation system of claim 1. Archibong does not explicitly disclose the system incorporated with a transaction kiosk, the kiosk further comprising a dispenser operable to dispense an item or service with the customized multi-media presentation.
Lindsay discloses a system incorporated with a transaction kiosk (e.g., kiosk 110), the kiosk further comprising a dispenser operable to dispense an item or service with the customized multi-media presentation (see include, but are not limited to, Lindsay: figures 1-4, para. 0026).
Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to further modify the output device of Archibong to comprise a transaction kiosk with a dispenser as taught by Lindsay in order to dispense a product based on information received from the user (see para. 0026).
Claim 21 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Archibong et al. (US 20140068692) in view of either Shimy/Swaminathan/Perez as applied to claim 1 above, and further in view of Conrad et al. (US 2013/0298146 A1).
Regarding claim 21, Archibong discloses the multi-media presentation system of claim 1, wherein the input subsystem collects data related to the viewer when the viewer is within a predetermined proximity of the system (see include, but are not limited to, discussion in “response to arguments” above and paragraphs 0119, 0149-0152, 0160-0161, 0167, 0171-0174).
Archibong discloses using a device identifier, Bluetooth, location or user movement, or image(s) of user(s) to identify the user (see include, but are not limited to, figures 15, 16, 18, paragraphs 0054, 0078, 0150, 0160, 0166-0167, 0170-0172, 0175-0176, 0180). Obviously, the data related to the viewer is collected without identifying an individual identity of the viewer (e.g., identifying based on user device identifier, location, user activity, age, etc., but not the identity, such as user name, date of birth, etc., of the user – see discussion in “response to arguments” in the previous office action).
Additionally and/or alternatively, Conrad discloses an input subsystem that collects data related to a viewer when the viewer is within a predetermined proximity of a system without identifying an individual identity of the viewer (e.g., an audience-sensing device operable to passively collect input data about the viewer without active input by the viewer, collecting data such as age (e.g., a viewer is among men aged 18-34), height of the user, young child, male or female, or whether the user raises a hand, laughs, etc., without identifying an individual identity of the viewer when the user is within a predetermined proximity of the system/TV – see include, but are not limited to, paragraphs 0005, 0025, 0030-0036, 0121).
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify Archibong with teachings including collecting input data without identifying an individual identity of the viewer, as taught by Conrad, to yield predictable results such as improving accuracy and convenience for collecting data of the user based on environment/activity (paragraphs 0002-0005), or reducing the ability of third parties/hackers to collect the user’s identity for unwanted purposes.
Alternatively to Archibong, it is noted that Conrad also discloses the features recited in claim 1 as discussed in the previous rejection, including:
a storage subsystem that stores an archetype comprising configuration data related to features of an advertising message related to an item or service for purchase (e.g., CMR – figures 2-4, 6-8, paragraphs 0005, 0030-0036, 0047);
a processor in communication with the input subsystem (audience-sensing device) and the storage subsystem (e.g., CMR), wherein the processor analyzes the collected input data to project which of the input data is likely to be of interest to the viewer, wherein the processor further selects the advertising message from the storage subsystem likely to be of interest to the viewer based on the input data, and the processor generates a customized multimedia presentation that incorporates the projected input data into the advertising message by replacing matched data of the advertising message with the input data (incorporating the projected/recommended input data/reaction data into the advertising message by replacing/updating the matched data of the advertising message with the input data/reaction data – see include, but are not limited to, figures 2-4, 6-8, 11, 14-16, paragraphs 0005, 0025, 0030-0036, 0049, 0121, 0143-0154, 0159-0167, 0184).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-15, 17-21 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 11,553,228. Although the conflicting claims are not identical, they are not patentably distinct from each other because the instant claims are an obvious variation of the invention defined in patent claims 1-17: the limitations in the instant claims, such as “item or service offered for sale…” and “a display subsystem adjacent to a dispenser of said item or service offered for sale…”, that are not recited in the patent claims are known from the prior art (see, for example, the prior art discussed in the rejections above). It would have been obvious to one of ordinary skill in the art to combine the patent claims with the well-known teachings of the prior art cited above for the benefits discussed in the prior art rejections above.
Allowance of claims 1-15, 17-21 would result in an unwarranted timewise extension of the monopoly granted for the invention as defined in claims 1-17 of Patent No. 11,553,228.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mattingly et al. (US 20100306671) discloses avatar integrated shared media selection comprising replacing faces (see, for example, figure 5F).
Robert et al. (US 8839306) discloses a method and apparatus for presenting media programs and replacing faces/avatars that represent the face of a user (see, for example, figure 6).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN SON PHI HUYNH whose telephone number is (571) 272-7295. The examiner can normally be reached at 9:00 am - 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NASSER M. GOODARZI can be reached on 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AN SON P HUYNH/Primary Examiner, Art Unit 2426
November 4, 2025