DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings were received on 02/07/2022. These drawings are acceptable.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 7, the limitation “wherein the AR session is established in response to receipt of an unprompted indication of the ML model” renders the claim indefinite because one of ordinary skill in the art would be unable to ascertain the intended scope of the claim limitation. Specifically, the limitation appears to impose contradictory requirements on the processing element: the processing element deploying/running the AR session receives information that serves as a prompt for taking actions and executing functional instructions. Because a prompt can reasonably be understood as the receipt of information that causes a response, it is unclear how a received element that causes a response can be categorized as the claimed “unprompted indication of the ML model.” The limitation as recited is incoherent and is therefore considered indefinite.
Regarding claim 17, the limitation is similar to that of claim 7 and is thus rejected under the same rationale.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/07/2022 has been considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Snibbe (US 10755487, hereinafter ‘Snib’) in view of Li et al. (US 20240045851, hereinafter ‘Li’).
Regarding independent claim 1, Snib teaches a computing platform comprising: at least one processor; a communication interface communicatively coupled to the at least one processor; and memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: (in 27:38-51: Processing subsystem 1004 controls the operation of computer system 1000 and may comprise one or more processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGAs). The processors may include single core and/or multicore processors. The processing resources of computer system 1000 may be organized into one or more processing units 1032, 1034, etc. A processing unit may include one or more processors, one or more cores from the same or different processors, a combination of cores and processors, or other combinations of cores and processors. In some embodiments, processing subsystem 1004 may include one or more special purpose co-processors such as graphics processors, digital signal processors (DSPs), or the like…)
select, once an augmented reality (AR) session has been established with an AR client device, a machine learning (ML) model, wherein the ML model is configured to produce an output based on user information; (in 16:4-11: As a result of this approach, different users of AR devices 122 may have a different view of the subject 112 when viewing the subject 112 through their AR devices 122 depending on their relationship with the subject [select, once an augmented reality (AR) session has been established with an AR client device, a machine learning (ML) model,]. This gives the subject 112 and other users of the social networking system 250 more control over how they are perceived by their friends, co-workers and other connections having other relationships with the subject [wherein the ML model is configured to produce an output based on user information]…; And in 15:9-30: Each perception profile may comprise a selection of one or more augmented reality (AR) elements [select, once an augmented reality (AR) session has been established with an AR client device, a machine learning (ML) model]. For example, the perception profile 274a for a subject 112 may comprise a visual overlay of a t-shirt, a baseball cap, and a link to the subject's 112 user profile on a social networking system, while the perception profile 274b for the subject 112 may comprise a tie, a suit, and a link to the subject's user profile on a second social networking system. Each perception profile may further comprise an arrangement of the one or more AR elements comprised within the perception profile...
Examiner notes that the AR system deployed to perform the disclosed functions is considered a machine learning model because it is instantiated using machine learning techniques, as noted in 22:66-23:9: …Using facial recognition and other machine learning techniques [select, once an augmented reality (AR) session has been established with an AR client device, a machine learning (ML) model], the augmented reality device 122 may detect that the subject is smiling more often or speaking in a more friendly tone. The augmented reality device 122 may also use natural language processing techniques [select, once an augmented reality (AR) session has been established with an AR client device, a machine learning (ML) model] to understand the contents of a conversation between the user 110 and the subject 112 and thus be able to recognize when the subject 112 has given verbal consent to connecting with the user 110 and reclassifying the user 110 into another relationship category…)
send, to the AR client device, an AR representation of the ML model, wherein the AR representation of the ML model illustrates, using AR, operation of the ML model; (in 1:66-2:20: In certain embodiments, techniques are provided to allow a person to customize how other people see or hear the person. For example, the techniques may provide a way for the person to customize different perception profiles so that the person may be perceived differently to different people or groups of people. These perception profiles may specify which AR elements are to be presented to people perceiving the person and how those AR elements are to be presented [wherein the AR representation of the ML model illustrates, using AR, operation of the ML model]… In one embodiment, a first user may use an AR device to capture sensory data associated with a scene. Using the sensory data, a second user may be identified in the scene. Identifying the second user may include determining identification information for the second user. In one embodiment, the identification information may include a user profile from a social networking system. Using the identification information of the second user, a perception profile associated with the second user may be identified. The perception profile may specify a selection and/or an arrangement of one or more AR elements for a particular person or a group of people. The perception profile may then be sent to the AR device [send, to the AR client device, an AR representation of the ML model].)
receive, from the AR client device, consent information indicating whether or not consent is provided to apply the ML model; (in 2:24-34: Upon receiving the perception profile, the AR device may present the one or more AR elements according to the specifications in the perception profile [receive, from the AR client device, consent information indicating whether or not consent is provided to apply the ML model]. Thus, the first user may view the one or more AR elements in conjunction with the first user's view of the second user. In one illustrative example, one set of AR elements may include a suit and a tie, thus giving the second user a more professional appearance. When viewed by the first user, the second user thus appears to be wearing professional attire given the AR elements overlaid onto the view of the second user.)
based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model; (in 16:21-38: As previously described the identity storage 270 may be implemented as a global registry of user accounts that may be accessed by one or more social networking systems 250. Accordingly, the identity storage 270 may be implemented on a separate computer system or server system as the server system 150 that is implementing the social networking system 250 and is communicatively connected to the social networking system 250 and other social networking systems via the communications network 240. Alternatively, the identity storage 270 may also share the same server system 150 with the social networking system 250 and is therefore communicatively connected to the social networking system 250 via one or more communication channels internal to the server system 150. Nevertheless, the identity storage 270 may also be accessible by other social networking systems 250 via the communications network 240. The identity storage 270 [based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model] may further be implemented on a distributed and decentralized network such as a blockchain network. )
apply the ML model, wherein applying the ML model produces a ML output for a user of the AR client device; (3:32-37: The audio or video output device may be configured to present the identified one or more AR elements. In some embodiments, the perception profile includes an arrangement of the visual element [apply the ML model, wherein applying the ML model produces a ML output for a user of the AR client device]. In such embodiments, the visual element may be displayed based upon the arrangement having been customized by the second user.)
and send, to the AR client device, one or more commands directing the AR client device to display the ML output, wherein sending the one or more commands directing the AR client device to display the ML output causes the AR client device to display the ML output; (in 9:11-30: The display 126 of the augmented reality device 122 may be used to display one or more augmented reality elements that may be overlaid on top of a user's 110 view of a subject 112. The augmented reality elements that may be presented to a user of an augmented reality device 122 may be directed to various sensory channels, such as visual, auditory, and haptic channels. While the display 126 of an augmented reality device 122 may be used to present visual augmented reality elements, the augmented reality device 122 may further comprise speakers and haptic feedback devices that can provide auditory and haptic augmented reality elements to the user 110. Components for providing auditory and haptic feedback may be comprised within the display 126 or may be separate from the display 126. Examples of augmented reality elements that may be displayed include numbers, text, GPS data, photos, images, visual overlays, icons, maps, webpages, Internet links, music, voice recordings, audio clips, sound effects, haptic feedback, vibrations, and/or any other augmented reality elements that may enhance a user's 110 view 130 of a scene 100.)
and based on identifying that consent has not been received: select an alternative machine learning model, wherein the alternative ML model is configured to produce the output based on the user information, send, to the AR client device, an AR representation of the alternative ML model, wherein the AR representation of the alternative ML model illustrates, using AR, operation of the ML model, (12:57-64: A user of the social networking system 250 may also use the social networking system 250 to connect with other users of the social networking system 250, thereby becoming “connected” with each other. Users who are connected with each other may be referred to as each other's social networking “connections.” Users may choose to grant a greater level of access [and based on identifying that consent has not been received] to the data they share to their connections than to users and non-users they are not connected with….14:26-36: A user's relationship categories may be configured to be viewable only by the user. The user may also configure his or her personal settings [and based on identifying that consent has not been received] to share his or her relationship categories with other users. For example, a user may want to make the relationship category of his or her chess club members publically accessible so that other users may be able to access the same relationship category and see which users belong to the relationship category. The user may further adjust the permissions associated with the relationship category such that other users may add or remove users from the relationship category…; And in 17:4-15: Similarly, a user of one or more social networking systems 250 may traditionally have to manage privacy settings and viewing privileges on each one of the user's social networking systems 250 for different categories of people who may view the users' profiles [and based on identifying that consent has not been received]. In contrast, this approach provides a user with one place for managing all of the user's perception profiles and the viewing permissions and privileges associated with each perception profile. This approach therefore helps the user avoid having to configure a separate set of profiles and viewing privileges for each one of one or more social networking systems 250 that the user may be a member of…; And in 21:52-22:25: … If the user 110 does not belong to any of the subjects' 112 relationship categories [and based on identifying that consent has not been received], the social networking system 250 may identify that a default category is appropriate for the user [select an alternative machine learning model, wherein the alternative ML model is configured to produce the output based on the user information]. In alternative embodiments, machine learning may be applied to the social networking data retrieved for the subject 112 and the user to predict which category the user should belong to [based on identifying that consent has not been received: select an alternative machine learning model, wherein the alternative ML model is configured to produce the output based on the user information,]. 
Such a prediction may be based on the user and the subject's social networking data… This allows the user 110 to be automatically categorized into one of the subject's 112 relationship categories without requiring explicit input from the subject [based on identifying that consent has not been received: select an alternative machine learning model, wherein the alternative ML model is configured to produce the output based on the user information]. The subject 112 may be able to configure whether such automatic classification features should be active... For example, upon classifying the user 110 as being part of the subject's 112 friend category, the perception profile 374 corresponding to that relationship category may be selected for subsequent display to the user 110 [send, to the AR client device, an AR representation of the alternative ML model, wherein the AR representation of the alternative ML model illustrates, using AR, operation of the ML model]. At step 614, based on the selected perception profile 374, the selection and arrangement of various augmented reality elements 132 may be extracted from the specification of the perception profile 374. This data may then be sent to the augmented reality device 122. In one embodiment, the augmented reality device 122 comprises a collection of augmented reality elements 132 that may be displayed depending on the specification received from the social networking system 250 regarding the selection and arrangement of the augmented reality elements 132. Alternatively, the augmented reality device 122 may need to retrieve data about augmented reality elements 132 from the social networking system 250 and/or the identity storage 270 via the communications network 240.)
receive, from the AR client device, second consent information indicating that consent is provided to apply the alternative ML model, write, to the distributed ledger, the second consent information and an identifier of the alternative ML model, apply the alternative ML model, wherein applying the alternative ML model produces an alternative ML output customized based on the user of the AR client device, and send, to the AR client device, one or more commands directing the AR client device to display the alternative ML output, wherein sending the one or more commands directing the AR client device to display the alternative ML output causes the AR client device to display the alternative ML output. (in 21:52-22:25: … If the user 110 does not belong to any of the subjects' 112 relationship categories, the social networking system 250 may identify that a default category is appropriate for the user. In alternative embodiments, machine learning may be applied to the social networking data retrieved for the subject 112 and the user to predict which category the user should belong to. Such a prediction may be based on the user and the subject's social networking data… This allows the user 110 to be automatically categorized into one of the subject's 112 relationship categories without requiring explicit input from the subject. The subject 112 may be able to configure whether such automatic classification features should be active... For example, upon classifying the user 110 as being part of the subject's 112 friend category, the perception profile 374 corresponding to that relationship category may be selected for subsequent display to the user 110 [send, to the AR client device, an AR representation of the alternative ML model, wherein the AR representation of the alternative ML model illustrates, using AR, operation of the ML model]. At step 614, based on the selected perception profile 374, the selection and arrangement of various augmented reality elements 132 may be extracted from the specification of the perception profile 374 [receive, from the AR client device, second consent information indicating that consent is provided to apply the alternative ML model, write, to the distributed ledger, the second consent information and an identifier of the alternative ML model, apply the alternative ML model,]. This data may then be sent to the augmented reality device 122. In one embodiment, the augmented reality device 122 comprises a collection of augmented reality elements 132 that may be displayed depending on the specification received from the social networking system 250 regarding the selection and arrangement of the augmented reality elements 132. Alternatively, the augmented reality device 122 may need to retrieve data about augmented reality elements 132 from the social networking system 250 and/or the identity storage 270 via the communications network 240.)
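For illustration only, the consent-gated flow recited in claim 1 may be summarized in the following sketch. The sketch is not code from Snib, Li, or the present application; all names (StubModel, ledger_append, run_session) are hypothetical, and a simple hash-linked list stands in for the claimed distributed ledger.

```python
import hashlib
import json
import time


class StubModel:
    """Hypothetical stand-in for a claimed ML model (not from Snib, Li, or the claims)."""

    def __init__(self, model_id: str):
        self.id = model_id

    def ar_representation(self) -> dict:
        # "AR representation of the ML model" that illustrates its operation
        return {"type": "ar_preview", "model_id": self.id}

    def apply(self, user_info: dict) -> dict:
        # "applying the ML model produces a ML output for a user"
        return {"model_id": self.id, "output_for": user_info.get("user")}


def ledger_append(ledger: list, record: dict) -> None:
    """Hash-linked append standing in for the claimed distributed-ledger write."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"prev": prev_hash, "ts": time.time(), **record}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)


def run_session(consents: dict, user_info: dict, ledger: list) -> list:
    """Consent-gated flow paralleling the claim 1 recitation."""
    sent = []
    model, alternative = StubModel("ml-primary"), StubModel("ml-alternative")
    sent.append(model.ar_representation())              # send AR representation
    if consents.get(model.id):                          # consent received
        ledger_append(ledger, {"consent": True, "model_id": model.id})
        sent.append({"cmd": "display", "payload": model.apply(user_info)})
    else:                                               # consent not received
        sent.append(alternative.ar_representation())    # alternative ML model
        if consents.get(alternative.id):                # second consent information
            ledger_append(ledger, {"consent": True, "model_id": alternative.id})
            sent.append({"cmd": "display", "payload": alternative.apply(user_info)})
    return sent


ledger: list = []
commands = run_session({"ml-alternative": True}, {"user": "user-1"}, ledger)
print(commands)
print(len(ledger), "consent record(s) written to the stand-in ledger")
```

Run as-is, the example withholds consent from the primary model, so only the alternative model's output and a single ledger record are produced, mirroring the "consent has not been received" branch of the claim.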
Snib teaches using a blockchain for communicating and storing network data, as noted above.
Additionally, Li teaches using a blockchain for communicating and storing network data, in [0196] In various embodiments of any of the method and apparatus, the blockchain storage solution may include a first blockchain for storing a full version of the local model updates corresponding to one or more of the plurality of participant nodes [based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model]. In various embodiments of any of the method and apparatus, the blockchain storage solution may include a second blockchain for storing a tailored version of the local model updates corresponding to one or more of the plurality of participant node… [0228] The information indicating whether to store local model updates may be included and/or indicated in the request to indicate whether a local model update of each FL participant is to be and/or should be recorded in the blockchain [based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model].…
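As context for the cited paragraphs, a minimal sketch of the storage pattern Li describes (a first blockchain for full local model updates, a second for tailored versions, gated by a per-request store flag) follows. Li discloses no source code; all names here are hypothetical, and the "tailored version" reduction is an assumption.

```python
import hashlib
import json


def append_block(chain: list, payload: dict) -> None:
    """Append a hash-linked block; a stand-in for a real blockchain write."""
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"prev": prev, "payload": payload}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)


full_chain: list = []      # first blockchain: full local model updates (cf. Li [0196])
tailored_chain: list = []  # second blockchain: tailored versions of those updates


def record_update(participant: str, update: dict, store: bool) -> None:
    """Record an FL participant's local model update only if the request's
    store flag indicates it should be written to the blockchain (cf. Li [0228])."""
    if not store:
        return
    append_block(full_chain, {"participant": participant, "update": update})
    # Li does not define "tailored version" in code; truncation is assumed here.
    tailored = {k: update[k] for k in list(update)[:1]}
    append_block(tailored_chain, {"participant": participant, "update": tailored})


record_update("node-1", {"layer1": [0.1, 0.2], "layer2": [0.3]}, store=True)
record_update("node-2", {"layer1": [0.4]}, store=False)  # flag off: not recorded
print(len(full_chain), "full block(s) /", len(tailored_chain), "tailored block(s)")
```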
Snib and Li are analogous art because both involve developing information retrieval and processing techniques using machine learning algorithms in distributed networking environments.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art, namely the systems directed to blockchain-enabled model storage, sharing, and deployment for supporting machine learning algorithms and augmented reality systems in distributed networking environments, as disclosed by Li, with the method of developing information retrieval and processing techniques for automated control of augmented reality technology in a distributed networking environment, as disclosed by Snib.
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Snib and Li as noted above; doing so would allow modifications to the blockchain by allowing designated authorities to edit, rewrite, or remove previous blocks of information without breaking the blockchain (Li, [0270]).
Regarding claim 2, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein the AR representation of the ML model is configured to be manipulated based on user input received via the AR client device, wherein the manipulation of the AR representation further illustrates the operation of the ML model. (in 22:60-23:8: …Reclassifying the user 110 may be based on sensory data gathered by the augmented reality device 122 during the course of the user's 110 interaction with the subject 112 [wherein the AR representation of the ML model is configured to be manipulated based on user input received via the AR client device,]. For example, the augmented reality device 122 may analyze the sensory data about the subject 112 to identify that the subject's 112 demeanor towards the user 110 has become more friendly [wherein the AR representation of the ML model is configured to be manipulated based on user input received via the AR client device, wherein the manipulation of the AR representation further illustrates the operation of the ML model]. Using facial recognition and other machine learning techniques, the augmented reality device 122 may detect that the subject is smiling more often or speaking in a more friendly tone. The augmented reality device 122 may also use natural language processing techniques to understand the contents of a conversation between the user 110 and the subject 112 and thus be able to recognize when the subject 112 has given verbal consent to connecting with the user 110 and reclassifying the user 110 into another relationship category…)
Regarding claim 3, the rejection of claim 2 is incorporated and Snib in combination with Li teaches the computing platform of claim 2, wherein the user input comprises selection of the alternative ML model. (in 22:60-23:8: …Reclassifying the user 110 may be based on sensory data [wherein the user input comprises selection of the alternative ML model as reclassified models] gathered by the augmented reality device 122 during the course of the user's 110 interaction with the subject 112 [wherein the user input comprises selection of the alternative ML model.]. For example, the augmented reality device 122 may analyze the sensory data about the subject 112 to identify that the subject's 112 demeanor towards the user 110 has become more friendly. Using facial recognition and other machine learning techniques, the augmented reality device 122 may detect that the subject is smiling more often or speaking in a more friendly tone. The augmented reality device 122 may also use natural language processing techniques to understand the contents of a conversation between the user 110 and the subject 112 and thus be able to recognize when the subject 112 has given verbal consent to connecting with the user 110 and reclassifying the user 110 into another relationship category…)
Regarding claim 4, the rejection of claim 3 is incorporated and Snib in combination with Li teaches the computing platform of claim 3, wherein the memory stores additional computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: receive an indication of the selection of the alternative ML model; and update the AR representation of the ML model based on the selection of the alternative ML model, wherein updating the AR representation of the ML model configures the AR representation of the ML model to illustrate operation of the alternative ML model. (in 22:60-23:8: Reclassifying the user 110 may be based on sensory data gathered by the augmented reality device 122 during the course of the user's 110 interaction [wherein the memory stores additional computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: receive an indication of the selection of the alternative ML model; as the indication is provided by the user sensor data modeled during the user interaction] with the subject 112. For example, the augmented reality device 122 may analyze the sensory data about the subject 112 to identify that the subject's 112 demeanor towards the user 110 has become more friendly [and update the AR representation of the ML model based on the selection of the alternative ML model, wherein updating the AR representation of the ML model configures the AR representation of the ML model to illustrate operation of the alternative ML model.]. Using facial recognition and other machine learning techniques, the augmented reality device 122 may detect that the subject is smiling more often or speaking in a more friendly tone. The augmented reality device 122 may also use natural language processing techniques to understand the contents of a conversation between the user 110 and the subject 112 and thus be able to recognize when the subject 112 has given verbal consent to connecting with the user 110 and reclassifying the user 110 into another relationship category [and update the AR representation of the ML model based on the selection of the alternative ML model, wherein updating the AR representation of the ML model configures the AR representation of the ML model to illustrate operation of the alternative ML model.].)
Regarding claim 5, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein the AR client device comprises one or more of: a mobile device, a tablet device, or AR glasses. (in 1:16-25: AR technology may take the form of electronic devices [wherein the AR client device comprises one or more of: a mobile device, a tablet device, or AR glasses], including wearable devices (e.g., smart eyeglasses), mobile devices (e.g., smartphones), tablets, or laptop computers. These AR devices may perform AR functions. For example, a pair of smart eyeglasses may include a transparent display capable of presenting various visual AR elements. When a user wears the smart eyeglasses, the display may be positioned in between the user's eyes and the scene that the user is viewing.)
Regarding claim 6, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein the distributed ledger comprises one of: a blockchain or a holo-chain. (in 16:34-38: … Nevertheless, the identity storage 270 may also be accessible by other social networking systems 250 via the communications network 240. The identity storage 270 may further be implemented on a distributed and decentralized network such as a blockchain network.)
Regarding claim 7, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein the AR session is established in response to receipt of an unprompted indication of the ML model. (in 2:10-34: … In one embodiment, a first user may use an AR device to capture sensory data associated with a scene [wherein the AR session is established in response to receipt of an unprompted indication of the ML model]. Using the sensory data, a second user may be identified in the scene. Identifying the second user may include determining identification information for the second user. In one embodiment, the identification information may include a user profile from a social networking system. Using the identification information of the second user, a perception profile associated with the second user may be identified. The perception profile may specify a selection and/or an arrangement of one or more AR elements for a particular person or a group of people. The perception profile may then be sent to the AR device. Upon receiving the perception profile, the AR device may present the one or more AR elements according to the specifications in the perception profile. Thus, the first user may view the one or more AR elements in conjunction with the first user's view of the second user [wherein the AR session is established in response to receipt of an unprompted indication of the ML model]. In one illustrative example, one set of AR elements may include a suit and a tie, thus giving the second user a more professional appearance. When viewed by the first user, the second user thus appears to be wearing professional attire given the AR elements overlaid onto the view of the second user. Examiner notes that receiving an unprompted indication is considered prompting an AR interaction without the need for user classification.)
Regarding claim 8, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein the AR session is established in response to selection of the ML model by the user of the AR client device. (in 2:10-34: … In one embodiment, a first user may use an AR device to capture sensory data associated with a scene [wherein the AR session is established in response to selection of the ML model by the user of the AR client device]. Using the sensory data, a second user may be identified in the scene. Identifying the second user may include determining identification information for the second user. In one embodiment, the identification information may include a user profile from a social networking system. Using the identification information of the second user, a perception profile associated with the second user may be identified. The perception profile may specify a selection and/or an arrangement of one or more AR elements for a particular person or a group of people [wherein the AR session is established in response to selection of the ML model by the user of the AR client device]. The perception profile may then be sent to the AR device. Upon receiving the perception profile, the AR device may present the one or more AR elements according to the specifications in the perception profile. Thus, the first user may view the one or more AR elements in conjunction with the first user's view of the second user [wherein the AR session is established in response to selection of the ML model by the user of the AR client device.]. In one illustrative example, one set of AR elements may include a suit and a tie, thus giving the second user a more professional appearance. When viewed by the first user, the second user thus appears to be wearing professional attire given the AR elements overlaid onto the view of the second user.)
Regarding claim 9, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein the AR client device comprises one or more edge nodes, and wherein communication between the computing platform and the AR client device occurs via the one or more edge nodes. (in 13:16-30: …The social networking system 250 may store the information in the form of a social graph. The social graph may include nodes representing users, individuals, groups, organizations, or the like. The edges between the nodes [wherein the AR client device comprises one or more edge nodes] may represent one or more specific types of interdependencies or interactions between the concepts. The social networking system 250 may use this stored information to provide various services (e.g., wall posts, photo sharing, event organization, messaging, games, advertisements, or the like) to its users to facilitate social interaction between users using the social networking system 250 [wherein communication between the computing platform and the AR client device occurs via the one or more edge nodes.]. In one embodiment, if users of the social networking system 250 are represented as nodes in the social graph, the term “friend” or “connection” may refer to an edge formed between and directly connecting two user nodes [wherein the AR client device comprises one or more edge nodes, and wherein communication between the computing platform and the AR client device occurs via the one or more edge nodes]. And in 11:58-12:2: … In one embodiment, the communications network 240 may have a centralized and hierarchical tree structure that connects the social networking system 250 at a root node at the top of the tree structure to a plurality of augmented reality devices 122 [wherein the AR client device comprises one or more edge nodes, and wherein communication between the computing platform and the AR client device occurs via the one or more edge nodes.] at leaf nodes at the bottom of the tree structure. In such a network, each node in the network may be connected to one or more nodes at a lower level in the tree structure, but only one node at a higher level in the tree structure. In another embodiment, the communications network 240 may have a decentralized mesh structure where any node in the network may be connected to one or more other nodes [wherein the AR client device comprises one or more edge nodes, and wherein communication between the computing platform and the AR client device occurs via the one or more edge nodes.]. It is understood that the communication network may be structured or arranged in other ways in alternative embodiments…)
Regarding claim 10, the rejection of claim 1 is incorporated and Snib in combination with Li teaches the computing platform of claim 1, wherein illustrating the operation of the ML model comprises illustrating a preview of the ML output. (in 15:9-30: Each perception profile may comprise a selection of one or more augmented reality (AR) elements. For example, the perception profile 274a for a subject 112 may comprise a visual overlay of a t-shirt, a baseball cap, and a link to the subject's 112 user profile on a social networking system, while the perception profile 274b for the subject 112 may comprise a tie, a suit, and a link to the subject's user profile on a second social networking system. Each perception profile may further comprise an arrangement of the one or more AR elements comprised within the perception profile [wherein illustrating the operation of the ML model comprises illustrating a preview of the ML output]. For example, perception profile 274a may comprise an arrangement wherein the t-shirt is overlaid onto the subject's 112 body, wherein the baseball cap may be overlaid onto the subject's head, and a social networking profile link that may be positioned next to a view of the subject's body [wherein illustrating the operation of the ML model comprises illustrating a preview of the ML output]. Similarly, perception profile 274b may comprise an arrangement of the AR elements wherein the suit is overlaid onto the subject's 112 body, wherein the tie is overlaid around the subject's 112 neck, and the professional networking link may be positioned next to a view of the subject's 112 body when viewed using an AR device 122.)
Regarding independent claims 11 and 20, the limitations are similar to those of claim 1 and are thus rejected under the same rationale.
Regarding claims 12-19, the limitations are similar to those in claims 2-9 and are rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Daly (US 20200193717): teaches using a blockchain for communicating and storing network data, in [0066] The system 103 forms [based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model] a smart contract 1102 between the property owner 1003 and the other user 106 when the property owner 1003 grants, denies, or revokes access to the property owner's property 1001. The system 103 saves the smart contract to a blockchain ledger 1101 [based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model]. The system 103 forks the blockchain 1101 to form a new smart contract 1102 when a smart contract 1102 needs to be amended. The new smart contract 1102 is forked by the rest of the nodes on the blockchain 1101 as long the smart contract 1102 displays no sign of being tampered with and the property owner 1003 shows the desire to continue to interact with that user 106. The user 106 interacting with the blockchain 1101 thread automatically accepts the fork that was made by that user 106. And in [0017] ... Users 106 of the present invention can collaborate to generate a digital globally-mapped, persistent 3D environment 101. In one embodiment of the invention, users remotely author content 701 to manually or automatically display locally or globally. In another embodiment of the invention, users 106 protect their content 105 through blockchain-encrypted 1101 smart contracts 1102 [based on identifying that consent has been received: write, to a distributed ledger, the consent information and an identifier of the ML model]. In another embodiment, the system 103 (also referenced herein as the game engine) provides security for property owners 1003 through blockchain-encrypted 1101 smart contracts 1102. Another embodiment allows users 106 to filter content 105 displayed to them within the AR environment 101. And in [0043] In one or more embodiments, data permissions may be established when sharing data elements, such that a device may be authorized to share certain data elements (e.g., a partial sharing) with some devices and not with others. For example, in an industrial environment, some device users may be authorized to access all of the data elements, while other device users may only be authorized to access a minimum amount of data elements. In one or more embodiments, the permissions may be recorded in a secure distributed transaction ledger 112 (e.g., blockchain) to enable a seamless sharing between devices in a permission-use environment.
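For orientation, a minimal sketch of the grant/deny/revoke pattern Daly [0066] describes (a smart-contract record formed on each access decision and saved to a blockchain ledger) follows. Daly discloses no source code; save_contract and all other names here are hypothetical, and a hash-linked list again stands in for the ledger.

```python
import hashlib
import json
import time


def save_contract(ledger: list, owner: str, user: str, decision: str) -> dict:
    """Form a smart-contract record for an access decision (grant, deny, or
    revoke) and save it to a hash-linked ledger, per the pattern in Daly [0066]."""
    assert decision in {"grant", "deny", "revoke"}
    prev = ledger[-1]["hash"] if ledger else "genesis"
    contract = {"prev": prev, "owner": owner, "user": user,
                "decision": decision, "ts": time.time()}
    contract["hash"] = hashlib.sha256(
        json.dumps(contract, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(contract)
    return contract


ledger: list = []
save_contract(ledger, owner="owner-1003", user="user-106", decision="grant")
save_contract(ledger, owner="owner-1003", user="user-106", decision="revoke")  # amendment
print([c["decision"] for c in ledger])
```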
Singh (US 20190334698): teaches in [0031] The data sharing platform 106 may utilize a secure, distributed ledger 112 to verify information received at the resource sharing module 102 and then mark the stored data as “valid” so that it can be safely used by the platform 106. As described further below, the validity in a secure, distributed ledger (e.g., block-chain) may be derived from the consensus.
Guim Bernat et al. (US 20210110310): teaches in [0045] Examples of workloads to be executed in an edge environment include autonomous driving computations, video surveillance monitoring, machine learning model executions, and real time data analytics. Additional examples of workloads include delivering and/or encoding media streams, measuring advertisement impression rates, object detection in media streams, speech analytics, asset and/or inventory management, and augmented reality processing… [0073] In further examples, an edge computing system is extended to provide for orchestration of multiple applications through the use of containers (a contained, deployable unit of software that provides code and needed dependencies) in a multi-owner, multi-tenant environment. A multi-tenant orchestrator may be used to perform key management, trust anchor management, and other security functions related to the provisioning and lifecycle of the trusted ‘slice’ concept in FIG. 4. For instance, an edge computing system may be configured to fulfill requests and responses for various client endpoints from multiple virtual edge instances (and, from a cloud or remote data center). The use of these virtual edge instances may support multiple tenants and multiple applications (e.g., augmented reality (AR)/virtual reality (VR), enterprise applications, content delivery, gaming, compute offload) simultaneously….
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571)272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLUWATOSIN ALABI/ Primary Examiner, Art Unit 2129