Prosecution Insights
Last updated: April 19, 2026
Application No. 18/795,814

GENERATING A BACKGROUND THAT ALLOWS A FIRST AVATAR TO TAKE PART IN AN ACTIVITY WITH A SECOND AVATAR

Non-Final OA: §103, §DP
Filed: Aug 06, 2024
Examiner: MCDOWELL, JR., MAURICE L
Art Unit: 2612
Tech Center: 2600 (Communications)
Assignee: Implementation Apps LLC
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (790 granted of 913 resolved; +24.5% vs TC average; above average)
Interview Lift: +12.9% in resolved cases with interview (moderate, roughly +13%)
Typical Timeline: 3y 0m average prosecution; 23 applications currently pending
Career History: 936 total applications across all art units
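The headline figures above are simple arithmetic on the examiner's docket history. A minimal sketch of how they appear to be derived, assuming the interview lift is additive in percentage points and the displayed values are truncated/rounded (the rounding rules are an assumption, not stated in the report):

```python
import math

# Figures taken from the report above.
granted, resolved = 790, 913          # career docket outcomes
interview_lift_pts = 12.9             # "Interview Lift", in percentage points

allow_rate = granted / resolved * 100  # career allow rate, in percent (~86.5)
with_interview = allow_rate + interview_lift_pts

print(math.floor(allow_rate))          # 86  -> the "86%" grant probability
print(round(with_interview))           # 99  -> the "99% With Interview"
```

This reproduces both displayed numbers: 790/913 ≈ 86.5%, shown as 86%, and 86.5 + 12.9 ≈ 99.4, shown as 99%.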

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 47.7% (+7.7% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 913 resolved cases.
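The "vs TC avg" deltas are mutually consistent with a single baseline: every rate minus its delta works out to 40.0%. A small sketch of that check (the 40.0% Tech Center average is inferred from the table, not stated in it):

```python
# Per-statute success rates from the table above, in percent.
rates = {"§101": 16.1, "§103": 47.7, "§102": 12.8, "§112": 7.7}
tc_avg = 40.0  # implied baseline: rate - delta = 40.0 for every row

deltas = {statute: round(rate - tc_avg, 1) for statute, rate in rates.items()}
print(deltas)  # {'§101': -23.9, '§103': 7.7, '§102': -27.2, '§112': -32.3}
```

The computed deltas match the table's "vs TC avg" column exactly, which suggests all four comparisons use the same Tech Center estimate.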

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

NPL entries 2-3, 9-13, 15-17 and 23-31 are missing dates (at least the year is needed) and have not been considered by the examiner. Note: "retrieved" or "found" dates do not count as dates that can be used to determine prior art.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 22-41 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,094,045 in view of NELSON (US 2011/0292051 A1, from IDS of 8/6/24).
Regarding claims 22 and 32, the patent teaches all of the limitations except the following, which the analogous prior art NELSON teaches: the device is one of a personal computer, a tablet computer and a mobile phone of the user (NELSON: see par. 45). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device being one of a personal computer, a tablet computer and a mobile phone of the user, as shown in NELSON, with the patent for the benefit of facilitating for the individual the ability to manually select the avatar with the closest resemblance (e.g., using touch input with a finger or stylus) [NELSON: 45].

Claims 22, 28-29 and 31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 5-6 of U.S. Patent No. 11,670,033 in view of NIMS (US 2023/0133976 A1) in view of NELSON (US 2011/0292051 A1, from IDS of 8/6/24).

Regarding claim 22, the patent teaches most of the limitations but does not teach the following, which the analogous prior art NIMS teaches: the background comprises a first element (NIMS: fig. 3, see par. 43 lines 1-7; note: the animated background is being interpreted as the first element); the background comprises a second element, different than the first element (NIMS: fig. 3, see par. 41 lines 4-15). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the background comprising a first element and a second element, different than the first element, as shown in NIMS, with the patent for the benefit of allowing users to maintain their anonymity while participating in and interacting with the social networking service and members thereof [NIMS: 3].
The previous combination of the patent and NIMS does not teach the following, which the analogous prior art NELSON teaches: the device is one of a personal computer, a tablet computer and a mobile phone of the user (NELSON: see par. 45). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device being one of a personal computer, a tablet computer and a mobile phone of the user, as shown in NELSON, with the previous combination for the benefit of facilitating for the individual the ability to manually select the avatar with the closest resemblance (e.g., using touch input with a finger or stylus) [NELSON: 45].

Instant claims (compared with the claims of Patent 12,094,045):

22. A method comprising: in a device: receiving input from a user; generating a first avatar depicting the user according to the input from the user; adjusting appearance aspects and motion aspects of the first avatar according to the input from the user; generating a second avatar; and generating a background for the first avatar and the second avatar, wherein: the background comprises a first element that enables the first avatar to take part in an activity with the second avatar, the background comprises a second element, different than the first element, that is selected from a plurality of images according to the input from the user, the plurality of images are associated with a theme, and the device is one of a personal computer, a tablet computer and a mobile phone of the user.

23. The method of claim 22, wherein the method comprises: in the device: configuring a movement of one or more reference points in the first avatar via a motion template.

24. The method of claim 23, wherein the movement is described according to adjacent video frames in a sequence of video frames provided by the user.

25. The method of claim 23, wherein the movement is described according to a time ordered list of distance vectors.

26.
The method of claim 23, wherein the movement is described according to a mathematical formula that determines a position, of the one or more reference points, over time.

27. The method of claim 23, wherein the movement is in three dimensions.

28. The method of claim 22, wherein the second avatar depicts a public personality.

29. The method of claim 22, wherein the activity is a show.

30. The method of claim 22, wherein the activity comprises the first avatar talking with the second avatar.

31. The method of claim 22, wherein the input from the user comprises one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance.

32. A system comprising: a device operable to: receive input from a user; generate a first avatar depicting the user according to the input from the user; adjust appearance aspects and motion aspects of the first avatar according to the input from the user; generate a second avatar; and generate a background for the first avatar and the second avatar, wherein: the background comprises a first element that enables the first avatar to take part in an activity with the second avatar, the background comprises a second element, different than the first element, that is selected from a plurality of images according to the input from the user, the plurality of images are associated with a theme, and the device is one of a personal computer, a tablet computer and a mobile phone of the user.

33. The system of claim 32, wherein the device is operable to: configure a movement of one or more reference points in the first avatar via a motion template.

34. The system of claim 33, wherein the movement is described according to adjacent video frames in a sequence of video frames provided by the user.

35. The system of claim 33, wherein the movement is described according to a time ordered list of distance vectors.

36.
The system of claim 33, wherein the movement is described according to a mathematical formula that determines a position, of the one or more reference points, over time.

37. The system of claim 33, wherein the movement is in three dimensions.

38. The system of claim 32, wherein the second avatar depicts a public personality.

39. The system of claim 32, wherein the activity is a show.

40. The system of claim 32, wherein the activity comprises the first avatar talking with the second avatar.

41. The system of claim 32, wherein the input from the user comprises one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance.

Patent 12,094,045 claims:

1. A method comprising: in connection with social media: generating a first avatar depicting a user according to input received from the user; adjusting one or both of appearance aspects and motion aspects of the first avatar according to the input from the user, wherein the appearance aspects of the first avatar are depicted according to a mood of the user; generating a second avatar; and generating a background for the first avatar and the second avatar, wherein: the background comprises video content that allows the user to choose to have the first avatar take part in an activity with the second avatar, the video content is specifically generated on a personal computer or a mobile device of the user, the background comprises one or more images, other than the video content, selected from a plurality of images according to the input from the user, and the plurality of images are associated with a theme generated via the social media.

6. The method according to claim 1, comprising configuring, by the motion template, a movement of one or more reference points in the first avatar.

7.
The method according to claim 6, wherein the movement is described according to adjacent video frames in a sequence of video frames provided by the user.

8. The method according to claim 6, wherein the movement is described according to a time ordered list of distance vectors.

9. The method according to claim 6, wherein the movement is described according to a mathematical formula that determines a position, of the one or more reference points, over time.

10. The method according to claim 6, wherein the movement is in three dimensions.

2. The method according to claim 1, wherein the second avatar depicts a public personality.

3. The method according to claim 1, wherein the activity is a show.

4. The method according to claim 1, wherein the activity comprises the first avatar talking with the second avatar.

5. The method according to claim 1, wherein the input received from the user comprises one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance.

11.
A system comprising: one or more processors for communicatively coupling to a communication network and operably coupled to a display device, the one or more processors operable to: monitor information from a user of social media; generate, according to the information from the user, a first avatar depicting the user; adjust one or both of appearance aspects and motion aspects of the first avatar according to the information from the user, wherein the appearance aspects of the first avatar are depicted according to a mood of the user; and generate a background for the first avatar and a second avatar, wherein: the background comprises video content that allows the user to choose to have the first avatar take part in an activity with the second avatar, the video content is specifically generated on a personal computer or a mobile device of the user, the background comprises one or more images, other than the video content, selected from a plurality of images according to the input from the user, and the plurality of images are associated with a theme generated via the social media.

16. The system according to claim 11, wherein the motion template is operable to configure a movement of one or more reference points in the first avatar.

17. The system according to claim 16, wherein the movement is described according to adjacent video frames in a sequence of video frames provided by the user.

18. The system according to claim 16, wherein the movement is described according to a time ordered list of distance vectors.

19. The system according to claim 16, wherein the movement is described according to a mathematical formula that determines a position, of the one or more reference points, over time.

20. The system according to claim 16, wherein the movement is in three dimensions.

13.
The system according to claim 11, wherein the second avatar depicts a public personality.

14. The system according to claim 11, wherein the activity is a show.

15. The system according to claim 11, wherein the activity comprises the first avatar talking with the second avatar.

12. The system according to claim 11, wherein the information from the user comprise one or more of a subject matter, a level of importance, a timeliness, and a classification of geographic relevance.

Instant claims:

22. A method comprising: in a device: receiving input from a user; generating a first avatar depicting the user according to the input from the user; adjusting appearance aspects and motion aspects of the first avatar according to the input from the user; generating a second avatar; and generating a background for the first avatar and the second avatar, wherein: the background comprises a first element that enables the first avatar to take part in an activity with the second avatar, the background comprises a second element, different than the first element, that is selected from a plurality of images according to the input from the user, the plurality of images are associated with a theme, and the device is one of a personal computer, a tablet computer and a mobile phone of the user.

28. The method of claim 22, wherein the second avatar depicts a public personality.

29. The method of claim 22, wherein the activity is a show.

31. The method of claim 22, wherein the input from the user comprises one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance.

Patent 11,670,033 claims:

1. A method comprising: in a social media website: generating a first avatar depicting a first user according to input received from the first user; generating a second avatar; and generating a background for the first avatar and the second avatar, wherein: the background allows the first avatar to take part in an activity with the second avatar, and the background comprises one or more images selected from a plurality of images according to the input from the first user and according to one or more characteristics, the plurality of images are associated with a theme according to information received by the social media website.

6. The method according to claim 1, wherein the method comprises adjusting one or both of appearance aspects and motion aspects of the first avatar according to the input from the first user.

2. The method according to claim 1, wherein the second avatar depicts a public personality.

3. The method according to claim 1, wherein the background comprises video content and the activity is a show.

5. The method according to claim 1, wherein the characteristics comprise one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance.

Claim Rejections - 35 USC § 103

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 22 and 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS (US 2023/0133976 A1) in view of PEREZ (US 2011/0007142 A1) in view of VASALOU et al., "Me, myself and I: The role of interactional context on self-presentation through avatars," Computers in Human Behavior 25.2 (2009): 510-520.

Regarding claim 22, NIMS teaches:

22. A method comprising (NIMS: par. 19): in a device (NIMS: fig. 1: 110-130, see par. 21 lines 1-4; par. 22 lines 1-7; par. 25 lines 1-4): receiving input from a user (NIMS: par. 34 lines 1-3 and 8-11); generating a first avatar depicting the user according to the input from the user (NIMS: par. 34 lines 8-14); generating a second avatar (NIMS: fig. 3, 301 (a-b), see par. 41); and generating a background for the first avatar and the second avatar, wherein (NIMS: fig. 3, 301 (a-b), see par. 41): the background comprises a first element that enables the first avatar to take part in an activity with the second avatar (NIMS: fig. 3, see par. 43 lines 1-7; note: the animated background is interpreted as the first element), the device is one of a personal computer, a tablet computer and a mobile phone of the user (NIMS: fig. 1: 110-130, see par. 21 lines 1-4; par. 22 lines 1-7).

NIMS does not teach the following; however, the analogous prior art PEREZ teaches: adjusting appearance aspects and motion aspects of the first avatar according to the input from the user (PEREZ: fig. 1, see par. 18 lines 1-11; par. 24 lines 1-6 and 16-17).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine adjusting appearance aspects and motion aspects of the first avatar according to the input from the user, as shown in PEREZ, with NIMS for the benefit of addressing a shortcoming in the prior art in that many computing applications such as computer games, multimedia applications, office applications, or the like provide a selection of predefined animated characters that may be selected for use in the application as the user's avatar. Some systems may incorporate a camera that has the ability to take a picture of a user and identify features from that frame of data. However, these systems require a capture of a user's feature, processing of the image, and then application to the character in a non-real time environment, and the features applied are low fidelity, usually based on a single snapshot of the user [PEREZ: 1].

NIMS in view of PEREZ does not teach the following; however, the analogous prior art VASALOU teaches: the background comprises a second element, different than the first element, that is selected from a plurality of images according to the input from the user (VASALOU: fig. 2, pg. 515; see also sub-section 5.3.1, pg. 514 lines 1-14), the plurality of images are associated with a theme (VASALOU: fig. 2, pg. 515; see also sub-section 5.3.1, pg. 514 lines 1-14; note: the cabin, gaming and London are considered as themes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the background comprising a second element, different than the first element, that is selected from a plurality of images according to the input from the user, the plurality of images being associated with a theme, as shown in VASALOU, with the previous combination for the benefit of fulfilling a shortcoming in the prior art in that, if designers are mindful of the customization options preferred by users of a specific domain, these options can be highlighted and made easily accessible in the avatar customization interface [VASALOU pg. 511 lines 1-4].

Claim 32 is analogous to claim 22 and is therefore rejected using the same rationale. Claim 32 further requires a different preamble, also taught by NIMS: A system comprising (NIMS: par. 19).

Claim(s) 23-24 and 33-34 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS in view of PEREZ in view of VASALOU in view of HYUN (KR-101270151-B1).

Regarding claim 23, the previous combination of NIMS in view of PEREZ in view of VASALOU does not teach the following; however, the analogous prior art HYUN teaches:

23. The method of claim 22, wherein the method comprises: in the device: configuring a movement of one or more reference points in the first avatar via a motion template (HYUN: pg. 4 line 45 - pg. 5 line 3).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine configuring a movement of one or more reference points in the first avatar via a motion template, as shown in HYUN, with the previous combination for the benefit of allowing a user to produce a dance movement of an avatar [HYUN: pg. 4 line 45].

Regarding claim 24, NIMS in view of PEREZ in view of VASALOU as modified by HYUN (with the same motivation from claim 23) further teaches:

24.
The method of claim 23, wherein the movement is described according to adjacent video frames in a sequence of video frames provided by the user (HYUN: pg. 2 lines 3-5).

Claim 33 is analogous to claim 23 and is therefore rejected using the same rationale. Claim 34 is analogous to claim 24 and is therefore rejected using the same rationale.

Claim(s) 25 and 35 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS in view of PEREZ in view of VASALOU in view of HYUN in view of HONDA (JP-3859020-B2) in view of FITZGIBBON (US 2011/0228976 A1).

Regarding claim 25, the previous combination of NIMS in view of PEREZ in view of VASALOU in view of HYUN does not teach the following; however, the analogous prior art HONDA teaches:

25. The method of claim 23, wherein the movement is described according to a time ordered list of vectors (HONDA: par. 12).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the movement being described according to a time ordered list of vectors, as shown in HONDA, with the previous combination for the benefit of addressing a shortcoming in the prior art in that, when moving an object having a complicated shape composed of a plurality of sub-objects, the identification number of each sub-object and the change information must be transmitted/received separately, which increases the amount of communication via the LAN 4. The transmission of the individual change information causes a delay, and as a result, the movement of the object displayed on each of the client terminals 3-1 to 3-3 is not consistent, or the movement of the plurality of sub-objects is delayed, such that a series of cooperative operations as a whole cannot be displayed smoothly [HONDA par. 11].
The previous combination of NIMS in view of PEREZ in view of VASALOU in view of HYUN in view of HONDA remains as above, but does not teach the following; however, the analogous prior art FITZGIBBON teaches: the vectors are distance vectors (FITZGIBBON: par. 98).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the vectors being distance vectors, as shown in FITZGIBBON, with the previous combination for the benefit of fulfilling a need in the prior art for facilitating the development of a body joint tracking system by providing training data in the form of synthesized images [FITZGIBBON par. 1].

Claim 35 is analogous to claim 25 and is therefore rejected using the same rationale.

Claim(s) 26-27 and 36-37 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS in view of PEREZ in view of VASALOU in view of HYUN in view of PEARSON (US 2009/0079743 A1).

Regarding claim 26, the previous combination of NIMS in view of PEREZ in view of VASALOU in view of HYUN does not teach the following; however, the analogous prior art PEARSON teaches:

26. The method of claim 23, wherein the movement is described according to a mathematical formula that determines a position, of the one or more reference points, over time (PEARSON: par. 39).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the movement being described according to a mathematical formula that determines a position, of the one or more reference points, over time, as shown in PEARSON, with the previous combination for the benefit of providing an approach that greatly simplifies the task of enabling a number of different 3D animations for avatars or other graphic objects within a browser program or other environment, where each graphic object or avatar can have many different appearances.
Further, it would be desirable to provide a higher quality and more realistic 3D appearance for avatars animated within a 2D display of a virtual environment or in an online game accessed within a browser program or other type of environment with limited capability for displaying animations. The same approach should also be useful in displaying other types of graphic objects that present similar problems due to the variety of display options and number of animations of the graphic objects that are available [PEARSON par. 5].

Regarding claim 27, NIMS in view of PEREZ in view of VASALOU in view of HYUN as modified by PEARSON (with the same motivation from claim 26) further teaches:

27. The method of claim 23, wherein the movement is in three dimensions (PEARSON: par. 39).

Claim 36 is analogous to claim 26 and is therefore rejected using the same rationale. Claim 37 is analogous to claim 27 and is therefore rejected using the same rationale.

Claim(s) 28-29 and 38-39 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS in view of PEREZ in view of VASALOU in view of EGOZY (US 2007/0245881 A1).

Regarding claim 28, the previous combination of NIMS in view of PEREZ in view of VASALOU does not teach the following; however, the analogous prior art EGOZY teaches:

28. The method of claim 22, wherein the second avatar depicts a public personality (EGOZY: fig. 2: 210, 230, 250 and 270, see par. 19 lines 12-15).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the second avatar depicting a public personality, as shown in EGOZY, with the previous combination for the benefit of fulfilling a need in the prior art for a system and method that enable players of a rhythm-action game to compete or cooperate over a network, such as the Internet, that has unpredictable or large latency [EGOZY: 6].
Regarding claim 29, NIMS in view of PEREZ in view of VASALOU as modified by EGOZY (with the same motivation from claim 28) further teaches:

29. The method of claim 22, wherein the activity is a show (EGOZY: fig. 5A, see par. 38).

Claim 38 is analogous to claim 28 and is therefore rejected using the same rationale. Claim 39 is analogous to claim 29 and is therefore rejected using the same rationale.

Claim(s) 30 and 40 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS in view of PEREZ in view of VASALOU in view of MANN (US 2013/0266927 A1).

Regarding claim 30, the previous combination of NIMS in view of PEREZ in view of VASALOU does not teach the following; however, the analogous prior art MANN teaches:

30. The method of claim 22, wherein the activity comprises the first avatar talking with the second avatar (MANN: fig. 1, 108, see pars. 4, 7 and 44).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the activity comprising the first avatar talking with the second avatar, as shown in MANN, with the previous combination for the benefit of providing a virtual world that allows players to take on roles and pursue interests that they might not be comfortable pursuing in real life. Such worlds have great benefits for their participants, but can also financially benefit their operators and others (e.g., advertisers or makers of virtual goods and services) who interact with the worlds [MANN: 3].

Claim 40 is analogous to claim 30 and is therefore rejected using the same rationale.

Claim(s) 31 and 41 is/are rejected under 35 U.S.C. 103 as being unpatentable over NIMS in view of PEREZ in view of VASALOU in view of MAKOFSKY (US 2014/0229850 A1).

Regarding claim 31, the previous combination of NIMS in view of PEREZ in view of VASALOU does not teach the following; however, the analogous prior art MAKOFSKY teaches:

31.
The method of claim 22, wherein the input from the user comprises one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance (MAKOFSKY: fig. 1, 110, see par. 25).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the input received from the user comprising one or more of a subject matter, a level of importance, a timeliness and a classification of geographic relevance, as shown in MAKOFSKY, with the previous combination so that users of the virtual environment may enjoy avatars having names or visual appearances that are more meaningful or personally relevant, thus improving user enjoyment of the virtual environment [MAKOFSKY: 15].

Claim 41 is analogous to claim 31 and is therefore rejected using the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAURICE L MCDOWELL, JR whose telephone number is (571) 270-3707. The examiner can normally be reached Mon-Thur & Sat: 2pm-10pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said A. Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MAURICE L. MCDOWELL, JR/
Primary Examiner, Art Unit 2612

Prosecution Timeline

Aug 06, 2024
Application Filed
Mar 11, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602875
TECHNIQUE FOR THREE DIMENSIONAL (3D) HUMAN MODEL PARSING
2y 5m to grant; granted Apr 14, 2026
Patent 12602887
AUGMENTED REALITY CONTROL SURFACE
2y 5m to grant; granted Apr 14, 2026
Patent 12598281
CONTROL APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM FOR DETERMINING A CAMERA PATH INDICATING A MOVEMENT PATH OF A VIRTUAL VIEWPOINT IN A THREE-DIMENSIONAL SPACE
2y 5m to grant; granted Apr 07, 2026
Patent 12579741
DETECTING THREE DIMENSIONAL (3D) CHANGES BASED ON MULTI-VIEWPOINT IMAGES
2y 5m to grant; granted Mar 17, 2026
Patent 12561905
Optimizing Generative Machine-Learned Models for Subject-Driven Text-to-3D Generation
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+12.9%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 913 resolved cases by this examiner. Grant probability derived from career allow rate.
