Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the amendments filed on October 9, 2025. Claims 1, 13, and 15 have been amended. Applicant's arguments concerning amended claims 1, 13, and 15 have been fully considered but are not persuasive. Claims 1-20 remain rejected.
Response to Arguments
In response to applicant’s arguments that Aslam fails to disclose “claim elements” as recited in claim 1, the argument has been fully considered but is not persuasive. The term “claim elements” does not appear in claim 1 as currently presented, and the examiner suggests further clarifying the argument.
In response to applicant’s arguments that Aslam fails to disclose that an AR engine is a graphics rendering engine, and that the Office improperly equates the AR engine with a graphics rendering engine, the arguments have been fully considered but are not persuasive. It is well known in the art that an AR engine is a graphics rendering engine.
In response to applicant’s arguments regarding transmission via a network connection and transmission of the augmented video, the arguments have been fully considered but are not persuasive. The Office action has been updated to reflect the amended network transmission limitation. Aslam explicitly teaches utilizing networks for transmission.
In response to applicant’s arguments regarding the dependent claims, the arguments have been fully considered but are not persuasive. Because the rejections of the independent claims are maintained, the rejections of the dependent claims are likewise maintained.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 3, 4, 9, 10, 11, 13, 14, 15, 16, 17, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aslam et al. (U.S. Patent Publication No. 2022/0262078).
Regarding claim 1, Aslam discloses a method comprising: receiving, by a graphics rendering engine and by way of a network connection with a first communication endpoint separate from the graphics rendering engine, a real-time video communication generated at the first communication endpoint for communication with a second communication endpoint separate from the graphics rendering engine (interpreted as the AR/graphics engine sits on the network and receives a live video feed from user #1 so that the feed can be shown to user #2) [Aslam: 0031 “System 10 includes AR engine 14. AR engine 14 may comprise an audio / video teleconferencing system. The audio / video teleconferencing system comprises system that enables audio and video communication between remote technician 20 and expert 22 located at headquarters or some other location separate from remote technician 20. The audio / video teleconferencing system transmits data over data network 12.”](teaches the AR engine (graphics rendering engine) is the intermediary teleconferencing unit; it receives the live A/V stream from one endpoint (technician 20) over network 12 for delivery to the other endpoint (expert 22)); accessing, by the graphics rendering engine from a data store separate from the first and second communication endpoints, a graphical representation of an object (interpreted as the engine pulls a 3D model or other asset from its own library that is not stored on either user’s devices)[Aslam: 0047 “At step 316, the AR device retrieves the AR model from the AR engine. The AR model represents a real world object in the field of view of the second user. For example, AR device 18 may receive an AR model from AR engine 14.”](teaches the AR device retrieving an AR model from the AR engine (graphics rendering engine)); generating, by the graphics rendering engine as the real-time video communication is being received, an augmented video communication featuring the graphical representation of the object overlaid onto a background texture depicting the real-time video communication (interpreted as, while the live video is streaming, the engine overlays the model on top, producing a mixed AR frame)[Aslam: 0044 “The AR device comprises a display configured to overlay virtual objects onto a field of view of a user in real - time.”]; and transmitting, by the graphics rendering engine by way of a network connection with the second communication endpoint [Aslam: 0016 “through any type of network”] and in real time as the augmented video communication is being generated by the graphics rendering engine (Aslam: 14; Fig. 1, “AR engine”)(teaches the AR engine, which is a graphics rendering engine), the augmented video communication to the second communication endpoint (interpreted as the mixed AR feed is streamed out immediately to user #2)[Aslam: 0039 “Doctor ( e.g. , expert 22 ) is in communication with EMT ( e.g. , remote technician 20 ) via data network 12 and AR device 18. In particular embodiments, doctor 22 may receive video and audio from AR device 18 to help diagnose patient 24.”](teaches the AR feed being sent to the remote doctor, which corresponds to the second endpoint).
Regarding claim 2, Aslam discloses the method of claim 1, further comprising: transmitting, by the graphics rendering engine in real time as the augmented video communication is being generated, the augmented video communication to the first communication endpoint (interpreted as, while the mixed reality video is still being built, the engine also streams that same feed back to the originating user)[Aslam: 0004 “The AR device may be further operable to transmit the manipulations performed on the AR model by the second user to the first user.”](teaches sending the manipulated AR model from the engine/AR device to the first user); receiving, by the graphics rendering engine from the first communication endpoint, manipulation instruction data indicating a manipulation of the graphical representation of the object that is to be performed within the augmented video communication (interpreted as the first user sends commands telling the engine how to change the virtual object) [Aslam: 0050 “At step 322, the AR device may receive input from the first user to manipulate the AR model.”]; and performing, by the graphics rendering engine in real time as the augmented video communication is being generated and transmitted, the manipulation of the graphical representation of the object indicated by the manipulation instruction data (interpreted as the engine applies those commands so the object moves, rotates, or resizes in the outgoing AR stream) [Aslam: 0050 “At step 324, the AR device may manipulate the AR model according to the received input. For example, AR device 18 manipulates AR projection 16 to display to EMT 20 the manipulations requested by doctor 22.”](teaches that the engine executes the manipulation immediately).
Regarding claim 3, Aslam discloses the method of claim 2, further comprising: receiving, by the graphics rendering engine from the second communication endpoint subsequent to a transfer of object manipulation control from the first communication endpoint to the second communication endpoint, additional manipulation instruction data indicating an additional manipulation of the graphical representation of the object that is to be performed within the augmented video communication (interpreted as: after the first user has had control, the system now gets new commands from the second user)[Aslam: 0050 “At step 322, the AR device may receive input from the first user to manipulate the AR model.”][Aslam: 0051 “At step 326, the AR device may receive input from the second user to manipulate the AR model.”]; and performing, by the graphics rendering engine in real time as the augmented video communication is being generated and transmitted, the additional manipulation of the graphical representation of the object indicated by the additional manipulation instruction data (interpreted as the engine immediately applies the second user’s commands so the live AR stream shows the change) [Aslam: 0051 “At step 328, the AR device may manipulate the AR model according to the received input.”] (teaches the engine carrying out the second user’s instructions “in real time”).
Regarding claim 4, Aslam discloses the method of claim 1, further comprising: transmitting, by the graphics rendering engine in real time as the augmented video communication is being generated, the augmented video communication to the first communication endpoint (interpreted as: while the mixed-reality feed is still being built, it is streamed back to the very user who started the call)[Aslam: 0039 “Doctor ( e.g. , expert 22 ) is in communication with EMT ( e.g. , remote technician 20 ) via data network 12 and AR device 18. In particular embodiments, doctor 22 may receive video and audio from AR device 18 to help diagnose patient 24.”](because the doctor (first endpoint) receives the live video generated by AR device 18, the reference teaches real-time return transmission of the augmented stream); receiving, by the graphics rendering engine from the first communication endpoint, annotation data indicating an annotation that is to be applied to the graphical representation of the object within the augmented video communication; and applying, by the graphics rendering engine in real time as the augmented video communication is being generated and transmitted, the annotation indicated by the annotation data [Aslam: 0039 “Doctor 22 may annotate a whiteboard visible to EMT 20 via AR device 18 to assist EMT 20 with treatment of patient 24. Doctor 22 may project annotations onto patient 24 via AR device 18 to assist EMT 20 with treatment of patient 24. As EMT 20 repositions patient 24, the projected annotations may reposition with patient 24.”](teaches both limitations: the act of the doctor annotating shows that input (annotation data) originates at the first endpoint and is conveyed to the AR system for application, further teaching that the annotation is applied to (and tracks with) the live scene in real time).
Regarding claim 9, Aslam discloses the method of claim 1, wherein: the first communication endpoint is associated with a contact center agent, the second communication endpoint is associated with a customer entity, and the augmented video communication is generated and transmitted in furtherance of a customer support transaction between the contact center agent and the customer entity (interpreted as one endpoint belongs to a support professional, the other to the customer; the live, augmented video is used to give that customer real-time help) [Aslam: 0031 “The audio / video teleconferencing system comprises system that enables audio and video communication between remote technician 20 and expert 22 located at headquarters or some other location separate from remote technician 20.”](expert 22 corresponds to a contact center agent, while remote technician 20 corresponds to the customer entity; this clearly teaches streaming audio-video plus AR overlays used to aid the customer); and the object is a particular model of an electronic device that is equivalent to a device possessed by the customer entity (interpreted as the overlaid model matches the physical device the customer owns)[Aslam: 0035 “Augmented reality projection 16 may , for example , represent a real world object such as a machine or device under repair. The AR projection includes instructional information regarding the real world object.”](teaches the AR projection can be the same machine or device under repair, i.e., the customer's device under repair).
Regarding claim 10, Aslam discloses the method of claim 9, further comprising: receiving, by the graphics rendering engine from the first communication endpoint, instruction data indicating that a preproduced animation is to be applied to the graphical representation of the object within the augmented video communication to demonstrate a particular action to be performed by the customer entity with respect to the electronic device possessed by the customer entity (interpreted as the rendering engine receives control data from the agent's endpoint telling it to play a previously created animation on the model so the customer can see how to perform a task on their own device)[Aslam: 0052 “At step 330 , the AR model represents a real world object and AR device receives audio - video instructions over the audio - video connection for the second user to manipulate the real world object. For example, in coordination with any manipulations performed on the AR model, doctor 22 may also send audio and / or video instructions to EMT 20 for assisting patient 24.”](teaches the engine receiving audio-video instructions from the agent's endpoint and further teaches that these instructions can be movies or animations demonstrating procedures); and applying, by the graphics rendering engine in real time as the augmented video communication is being generated and transmitted, the preproduced animation to the graphical representation of the object as part of the customer support transaction (interpreted as, while the call is live, the engine actually overlays/plays that stored animation on the model so the customer sees the motion on screen, all in real time during the support session) [Aslam: 0049 “the AR device displays on the determined surface an AR projection based on the AR model to the second user via the display. The AR projection includes instructional information regarding the real world object.”][Aslam: 0060 “Display 706 is configured to present visual information to a user in an augmented reality environment that overlays virtual or graphical objects onto tangible objects in a real scene in real - time.”](teaches real-time AR projection that overlays instructional content onto the model as the video stream is sent, and further teaches that the overlaid content can include animations demonstrating procedures which are shown in real time within the AR environment).
Regarding claim 11, Aslam discloses the method of claim 9, further comprising accessing, by the graphics rendering engine from the data store, a graphical representation of an additional object (interpreted as the engine pulls yet another model out of storage so it can be shown during the support session) [Aslam: 0047 “At step 316, the AR device retrieves the AR model from the AR engine.”][Aslam: 0032 “AR engine 14 comprises an AR system for transmitting and / or receiving augmented reality models with AR device 18.”](teaches the AR engine supplies AR models on demand; because the procedure can be repeated, the step teaches retrieving each model, including any additional one, into the session); wherein: the augmented video communication generated by the graphics rendering engine further features the graphical representation of the additional object overlaid onto the background texture alongside the graphical representation of the object (interpreted as, while the live camera feed is streaming, the new model is dropped into the same view next to the first model) [Aslam: 0003 “The AR device comprises a display configured to overlay virtual objects onto a field of view of the user in real - time”](teaches real-time overlay of multiple virtual objects, which includes the first and the additional models, onto the same live video background); and the additional object is a different particular model of the electronic device that is equivalent to an additional electronic device possessed by the customer entity and that is presented to demonstrate a particular action to be performed by the customer entity with respect to both the electronic device and the additional electronic device as part of the customer support transaction (interpreted as the second model represents another device the customer owns; it is shown so the agent can walk the customer through a procedure that affects both devices)[Aslam: 0041 “in some embodiments an information technology ( IT ) specialist may remotely instruct a data center employee how to install and cable networking equipment . The IT specialist may annotate the locations of particular slots in a server shelf to install particular server blades and / or particular interface connections to connect particular cables”](teaches that the server shelf (first device) and the different server blades (additional devices of the same class) are both modeled in AR; the expert's annotations demonstrate how the customer should handle both of their devices).
Claims 13 and 15 are system claims corresponding to the method claim 1 above. Aslam further discloses a processor (Aslam: 702; Fig. 4). Thus, claims 13 and 15 are rejected for the same reason as claim 1.
Claims 14 and 16 are system claims corresponding to the method claim 2 above. Thus, claims 14 and 16 are rejected for the same reason as claim 2.
Claim 17 is a system claim corresponding to the method claim 4 above. Thus, claim 17 is rejected for the same reason as claim 4.
Claim 19 is a system claim corresponding to the method claim 9 above. Thus, claim 19 is rejected for the same reason as claim 9.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 6, 7, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Aslam et al. (U.S. Patent Publication No. 2022/0262078), in view of Szymczyk et al. (U.S. Patent No. 8,275,590).
Regarding claim 5, Aslam discloses the method of claim 1, but fails to explicitly disclose the first communication endpoint is associated with a seller entity, the second communication endpoint is associated with a customer entity, and the augmented video communication is generated and transmitted in furtherance of a sales transaction between the seller entity and the customer entity; and the object is a particular model of an electronic device being offered for sale as part of the sales transaction.
However, Szymczyk discloses wherein: the first communication endpoint is associated with a seller entity, the second communication endpoint is associated with a customer entity, and the augmented video communication is generated and transmitted in furtherance of a sales transaction between the seller entity and the customer entity (interpreted as the video call is a real-time sales presentation between a salesperson and a buyer)(Szymczyk: Col. 5, Lines 37-44 “The conferencing portion may display one or more other users of the system or similar system. The one or more other users displayed by the conferencing portion may be virtually trying on real-wearable items. A user of the virtual-outfitting interface may interact, such as by voice or text, with other users via the conferencing portion. This may give an enhanced sense of shopping with other users in disparate locations.”)(teaches that the conferencing portion delivers the live video link between a remote “seller” and the “customer” (local user), expressly for shopping); and the object is a particular model of an electronic device being offered for sale as part of the sales transaction (interpreted as the item shown in AR is the exact model the shopper can buy) (Szymczyk: Col. 5, Lines 31-35 “The current item details portion may include one or more details of the real-wearable item that a user is currently trying on virtually using the system. Exemplary details that may be presented by the current item details portion include type, size, style, brand, vendor, price, availability, and/or other details associated with real-wearable items.”)(teaches that item details are included in the presentation, which clearly means the virtual item that is presented is the specific product for sale).
Aslam and Szymczyk are both considered to be analogous to the claimed invention because they are in the same field of real-time augmented-reality overlays in live, two-party video sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Aslam to incorporate Szymczyk’s teachings of using the same AR overlay for a seller-to-customer product demonstration and purchase workflow. The motivation for such a combination is the straightforward commercial benefit of adding a proven virtual try-on capability, thereby enhancing customer engagement and experience.
Regarding claim 6, Aslam discloses the method of claim 5, but fails to explicitly disclose further comprising accessing, by the graphics rendering engine from the data store, a graphical representation of an additional object; wherein: the augmented video communication generated by the graphics rendering engine further features the graphical representation of the additional object overlaid onto the background texture alongside the graphical representation of the object; and the additional object is a different model of the electronic device and is offered for sale as an alternative to the particular model in the sales transaction.
However, Szymczyk discloses further comprising accessing, by the graphics rendering engine from the data store, a graphical representation of an additional object (interpreted as the engine fetches another item model)(Szymczyk: Col. 4, Lines 5-7 “The user may add, subtract, reorder, scroll through, browse, and/or otherwise manage selected virtual-wearable items included in such a queue.”)(fetching and queuing constitutes accessing extra graphical objects from storage); wherein: the augmented video communication generated by the graphics rendering engine further features the graphical representation of the additional object overlaid onto the background texture alongside the graphical representation of the object (interpreted as both items appear together in the live video)(Szymczyk: Col. 4, Lines 15-22 “The main display portion may include one or more images and/or video of the user virtually trying on one or more real-wearable items that correspond to one or more selected virtual-wearable items. In such images and/or video, the one or more selected virtual-wearable items may be visually overlaid on the user in a position in which the user would normally wear corresponding real-wearable items.”)(teaches one or more items are rendered simultaneously); and the additional object is a different model of the electronic device and is offered for sale as an alternative to the particular model in the sales transaction (interpreted as the second item is an alternative for purchase)(Szymczyk: 19-26 “effectuate a displayed virtual-wearable item included in the main display portion being replaced by a different selected virtual-wearable item such that the user appears to be wearing the different selected virtual-wearable item in the main display portion, to cycle through virtual-wearable items included in a queue of several virtual-wearable items, and/or to perform other”) (teaches swapping to a different product model for the user to decide which to buy).
Aslam and Szymczyk are both considered to be analogous to the claimed invention because they are in the same field of real-time augmented-reality overlays in live, two-party video sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Aslam to incorporate Szymczyk’s teachings of retrieving and displaying a second, different product model side-by-side with the first as an alternative choice for the customer. The motivation for such a combination is the well-recognized commercial benefit of letting a seller present multiple purchase options within the same live AR session, thereby increasing the chances of closing a sale and enhancing customer engagement and experience.
Regarding claim 7, Aslam discloses the method of claim 5, but fails to explicitly disclose further comprising accessing, by the graphics rendering engine from the data store, a graphical representation of an additional object; wherein: the augmented video communication generated by the graphics rendering engine further features the graphical representation of the additional object overlaid onto the background texture alongside the graphical representation of the object; and the additional object is a different model of the electronic device that is familiar to the customer entity and is presented to facilitate a comparison with the particular model while not being offered for sale as part of the sales transaction.
However, Szymczyk discloses further comprising accessing, by the graphics rendering engine from the data store, a graphical representation of an additional object (interpreted as the engine grabs another model) (Szymczyk: Col. 4, Lines 5-7 “The user may add, subtract, reorder, scroll through, browse, and/or otherwise manage selected virtual-wearable items included in such a queue.”)(fetching and queuing constitutes accessing extra graphical objects from storage); wherein: the augmented video communication generated by the graphics rendering engine further features the graphical representation of the additional object overlaid onto the background texture alongside the graphical representation of the object (interpreted as both items appear together in the live video)(Szymczyk: Col. 4, Lines 15-22 “The main display portion may include one or more images and/or video of the user virtually trying on one or more real-wearable items that correspond to one or more selected virtual-wearable items. In such images and/or video, the one or more selected virtual-wearable items may be visually overlaid on the user in a position in which the user would normally wear corresponding real-wearable items.”)(teaches one or more items are rendered simultaneously); and the additional object is a different model of the electronic device that is familiar to the customer entity and is presented to facilitate a comparison with the particular model while not being offered for sale as part of the sales transaction (interpreted as the second item is just for comparison, not for sale)(Szymczyk: Col. 3, Lines 56-59 “The item-search/selection module may be configured to provide suggestions of one or more real-wearable items”)(teaches suggesting items to let the customer compare familiar models, which corresponds to the claimed limitation).
Aslam and Szymczyk are both considered to be analogous to the claimed invention because they are in the same field of real-time augmented-reality overlays in live, two-party video sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Aslam to incorporate Szymczyk’s teachings of retrieving and displaying a familiar, comparison-only product model alongside the primary item. The motivation for such a combination is improving the customer’s decision-making process by letting the seller show a known reference product even when that reference item is not for sale, thereby increasing customer confidence.
Claim 18 is a system claim corresponding to the method claim 5 above. Thus, claim 18 is rejected for the same reason as claim 5.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Aslam et al. (U.S. Patent Publication No. 2022/0262078), in view of Szymczyk et al. (U.S. Patent No. 8,275,590), and further in view of Gupta et al. (U.S. Patent Publication No. 2021/0021901).
Regarding claim 8, Aslam and Szymczyk disclose the method of claim 5, but fail to explicitly disclose further comprising producing, by the graphics rendering engine, a machine-readable code that indicates the particular model of the electronic device; wherein: the augmented video communication generated by the graphics rendering engine further features the machine-readable code overlaid onto the background texture alongside the graphical representation of the object or as a replacement for the graphical representation of the object; and the machine-readable code is configured to facilitate the customer entity in completing the sales transaction subsequent to a presentation of the augmented video communication at the second communication endpoint.
However, Gupta discloses further comprising producing, by the graphics rendering engine, a machine-readable code that indicates the particular model of the electronic device [Gupta: 0046 “The QR code 122 may be generated by the merchant 112 for accepting payments from the user 102”]; wherein: the augmented video communication generated by the graphics rendering engine further features the machine-readable code overlaid onto the background texture alongside the graphical representation of the object or as a replacement for the graphical representation of the object (interpreted as the live video/advertisement shows the QR code superimposed on the same frame that already shows the product image/video)[Gupta: 0052 “The advertisement includes at least one QR code 122 for purchasing the item.”][Gupta: 0044 “the QR code reader application renders a capture overlay frame on a display screen 108a of the user device 108. The capture overlay frame helps in capturing the QR code 122 displayed on the display screen 108a.”](teaches the QR code is visibly embedded in the product advertisement video); and the machine-readable code is configured to facilitate the customer entity in completing the sales transaction subsequent to a presentation of the augmented video communication at the second communication endpoint (interpreted as scanning/reading the code lets the viewer finish the purchase after seeing the video)[Gupta: 0044 “Once the QR code 122 is captured, the QR code reader extracts the set of QR data that may be used for processing the payment transaction for purchasing the advertised items 124 and 126.”](teaches that the QR code is configured to complete a sales transaction of the advertised item).
Aslam, Szymczyk, and Gupta are all considered to be analogous to the claimed invention because they are in the same field of augmented-reality overlays. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Aslam and Szymczyk to incorporate Gupta’s teachings of using QR codes to facilitate transaction processing. The motivation for such a combination is to improve the customer transaction experience, thereby increasing customer satisfaction.
Claims 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Aslam et al. (U.S. Patent Publication No. 2022/0262078), in view of Bayha et al. (U.S. Patent No. 9,792,594).
Regarding claim 12, Aslam discloses the method of claim 1, wherein: the first communication endpoint is associated with a contact center agent, the second communication endpoint is associated with a customer entity, and the augmented video communication is generated and transmitted in furtherance of a customer support transaction between the contact center agent and the customer entity [Aslam: 0031 “The audio / video teleconferencing system comprises system that enables audio and video communication between remote technician 20 and expert 22 located at headquarters or some other location separate from remote technician 20.”](teaches that expert 22 functions as the contact center agent and remote technician 20 is the customer entity receiving live support); but fails to explicitly disclose that the object is a representation of account data associated with a service account of the customer entity.
However, Bayha discloses and the object is a representation of account data associated with a service account of the customer entity (Bayha: Col. 4, Lines 35-38 “some or all messages regarding the financial transaction that may usually be displayed on the ATM display screen 114 are instead displayed on the headset display 110.”)(Bayha: Col. 4, Lines 39-40 “financial information such as the user's account balance is displayed on headset display 110.”)(teaches that the headset’s AR overlay replaces the ATM screen and presents live account-specific data associated with the customer’s service account).
Aslam and Bayha are both considered to be analogous art to the claimed invention because they are in the same field of augmented-reality customer support systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Aslam to incorporate Bayha’s teaching of presenting customer account data as an augmented object within the live support session. The motivation for such a combination is to provide a more personalized and efficient support experience, thereby enhancing customer satisfaction.
Claim 20 is a system claim corresponding to method claim 12 above. Thus, claim 20 is rejected for the same reasons as claim 12.
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED TAHA whose telephone number is (571)272-6805. The examiner can normally be reached 8:30 am - 5 pm, Mon - Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XIAO WU, can be reached at (571)272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED TAHA/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613