Prosecution Insights
Last updated: April 19, 2026
Application No. 18/790,566

IMAGE-BASED DEVICE CUSTOMIZATION FOR MULTIPLE USERS

Non-Final OA §103
Filed: Jul 31, 2024
Examiner: HUYNH, THANG GIA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Motorola Mobility LLC
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (19 granted / 25 resolved), +14.0% vs TC avg (above average)
Interview Lift: +50.0% for resolved cases with interview (strong)
Typical Timeline: 2y 4m avg prosecution; 21 currently pending
Career History: 46 total applications across all art units
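The headline figures above are simple ratios over this examiner's resolved cases. As a minimal sketch of that arithmetic (in Python; the 62% Tech Center average is an assumption inferred from the displayed +14.0% delta, not sourced data):

```python
# Sketch of the examiner-intelligence arithmetic shown above.
# The 0.62 Tech Center average is inferred (76% - 14.0%), not sourced data.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted cases / resolved cases."""
    return granted / resolved

rate = allow_rate(19, 25)                # 19 granted of 25 resolved
print(f"Career allow rate: {rate:.0%}")  # -> Career allow rate: 76%

tc_avg = 0.62                            # assumed TC 2600 average implied by the delta
print(f"{rate - tc_avg:+.1%} vs TC avg") # -> +14.0% vs TC avg
```

The same ratio drives the Grant Probability figure in the projections section, which is why the two numbers match.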

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 73.9% (+33.9% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)
Tech Center average values are estimates • Based on career data from 25 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2 and 6-17 are rejected under 35 U.S.C. 103 as being unpatentable over Hunsmann et al. (US 20230129243 A1) (hereinafter referred to as Hunsmann) in view of Lin et al. (US 20210272253 A1) (hereinafter referred to as Lin).
Regarding Claim 1, Hunsmann discloses A first electronic device, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the electronic device to: (See [0074], “The machine 1500 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC) . . .” Also see [0075], “The machine 1500 may include processors 1502, memory 1504, and . . .”)

generate, for a first user, a customization for the first user based at least in part on an appearance of the first user in the image; (See [0006], “A system is disclosed that analyzes a digital image of a person's face (e.g., from a selfie) for color and facial features or characteristics to derive a set of colors and patterns that complement and enhance the person's look when applied to clothing products worn by the person.” In this case, a set of colors and patterns that complement and enhance the person's look would correspond to “a customization for the first user based at least in part on an appearance of the first user in the image”.)

However, Hunsmann fails to explicitly disclose: obtain an image including multiple users; detect two or more users of the multiple users that are included in the image; generate, for a first user of the two or more users, a customization for the first user based at least in part on an appearance of the first user in the image; generate, for a second user of the two or more users, a customization for the second user based at least in part on an appearance of the second user in the image; communicate an indication of the customization for the second user to a second electronic device associated with the second user.

Lin teaches obtain an image including multiple users; (See [0038], “Further, in some implementations, the act 102 can include the image merging system identifying faces of each of the people in the received images.” Also see Fig. 2 showing images with multiple users.)
detect two or more users of the multiple users that are included in the image; (See [0047], “For instance, the face detection model 204 is a face detection neural network trained to identify and isolate faces of persons within each of the images.”)

generate, for a first user of the two or more users, a customization for the first user based at least in part on an appearance of the first user in the image; (See [0065], “For example, the image merging system analyzes the facial features of each detected face and generates a face descriptor based on the result of the analysis.” In combination with Hunsmann [0006], which teaches analyzing a person's face to derive a set of colors and patterns (generating a customization for a user), and Lin, which teaches analyzing each face in an image that can include more than one user, the above limitation is taught.)

generate, for a second user of the two or more users, a customization for the second user based at least in part on an appearance of the second user in the image; (See Lin [0065] teaching analyzing each face in an image. See Hunsmann [0006] teaching analyzing a person's face to derive a set of colors and patterns. In this case, the combination of Lin teaching multiple faces and Hunsmann teaching generating a set of colors and patterns based on analyzing a face would result in the limitation of generating a customization for a second user when there are a first and second user in an image.)

communicate an indication of the customization for the second user to a second electronic device associated with the second user. (See [0166], “In various implementations, the series of acts 1400 includes the act of providing the merged first image with the first person and the second person to a client device associated with a user.” In one common scenario, the “client device associated with a user” could be a second electronic device associated with the second user.
In combination with Hunsmann, instead of providing a merged image, one would send the customization. Note that simply sending the customization would imply sending “an indication of the customization”.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hunsmann with Lin to include obtaining an image with multiple users, analyzing each face for generation purposes, and then providing the result to a client device associated with one of the users. The motivation to combine Hunsmann with Lin would have been obvious as both are within the same field of image processing as well as analyzing the faces of the people in the image (See Lin Abstract). The benefit of being able to analyze multiple faces in a single image is that it allows for quicker processing and broader applicability of the invention. In the case of Hunsmann in view of Lin, instead of needing different selfies for a first and second user, one can simply input an image with both present to generate a customization for each face. Note that adapting Hunsmann to the teachings of Lin of applying facial analysis to more than one face would be a trivial implementation for someone ordinarily skilled in the art.

Regarding Claim 2, Hunsmann in view of Lin disclose The first electronic device of claim 1, wherein the customization for the first user in the image is different than the customization for the second user in the image. (See Hunsmann [0006] teaching analyzing a person's face to derive a set of colors and patterns. Also see Lin [0065] teaching analyzing each face in an image (first and second user). Note that, for different users having different appearances, the generated colors and patterns would obviously differ from one another. The motivation to combine would have been similar to that of the Claim 1 rejection.)
Regarding Claim 6, Hunsmann in view of Lin disclose The first electronic device of claim 1, wherein the first electronic device further comprises an image capture system, and the at least one processor is further configured to cause the first electronic device to obtain the image by capturing the image with the image capture system. (See Hunsmann [0032], “Different smart phone cameras resulting in different picture qualities”.)

Regarding Claim 7, Hunsmann in view of Lin disclose The first electronic device of claim 1, wherein the at least one processor is further configured to cause the first electronic device to communicate an indication of the customization for the first user to an electronic device associated with the first user. (See Hunsmann [0006] teaching analyzing a person’s face to derive a set of colors and patterns (customization). See Lin [0166], “In various implementations, the series of acts 1400 includes the act of providing the merged first image with the first person and the second person to a client device associated with a user.” The motivation to combine would have been similar to that of the Claim 1 rejection.)

Regarding Claim 8, Hunsmann in view of Lin disclose The first electronic device of claim 1, wherein the two or more users are less than all of the multiple users included in the image. (See Lin Fig. 2 showing the image having at least three users. The motivation to combine would have been similar to that of the Claim 1 rejection.)

Regarding Claim 9, Hunsmann in view of Lin disclose The first electronic device of claim 1, wherein the at least one processor is further configured to cause the first electronic device to display a prompt for a user of the electronic device to select whether to have the indication of the customization for the second user communicated to the second electronic device associated with the second user. (See Lin [0166] providing the generated content to a client device associated with a user.
Also see Lin [0126], “In alternative implementations, the image merging system provides a link to the merged image or a downloadable copy of the merged image to the user where the user can access, save, and/or share the merged image.” In this case, the process of creating a link to share the generated content can be considered as “to display a prompt for a user of the electronic device to select whether to have the indication of the customization for the second user communicated”. In combination with Hunsmann, instead of providing a link to a generated merged image, one would provide a link to the generated customization. The motivation to combine would have been similar to that of the Claim 1 rejection.)

Regarding Claim 10, Hunsmann in view of Lin disclose The first electronic device of claim 1, wherein the two or more users of the multiple users are associated with a user of the first electronic device. (See Lin [0075], “Additionally, or in the alternative, the image merging system communicates with a social networking system associated with the user to identify relationships (e.g., based on matching faces on the social networking site with those detected in the images).” Here, Lin teaches the ability to determine the relationship between the users, thus implying, in a common scenario, that the users are associated with each other. The motivation to combine would have been similar to that of the Claim 1 rejection.)

Regarding Claim 11, Hunsmann in view of Lin disclose The first electronic device of claim 10, wherein the two or more users are associated with the user of the first electronic device by being included in a contacts list of the user of the first electronic device or being part of a same collaborative space as the user of the first electronic device.
(See Lin [0075], “Additionally, or in the alternative, the image merging system communicates with a social networking system associated with the user to identify relationships (e.g., based on matching faces on the social networking site with those detected in the images).” Note that users associated with a social networking system can be considered as being included in “a contacts list of the user of the first electronic device or being part of a same collaborative space as the user of the first electronic device”. The motivation to combine would have been similar to that of the Claim 1 rejection.)

Regarding Claim 12, Hunsmann in view of Lin disclose A method, comprising: (See Hunsmann Abstract, “Various embodiments described herein provide techniques for analyzing an image (e.g., a selfie) of a user with one or more pre-trained machine learned models, to detect facial characteristics of the user.”)

obtaining an image including multiple users; detecting two or more users of the multiple users that are included in the image; generating, for a first user of the two or more users, a customization for the first user based at least in part on an appearance of the first user in the image; generating, for a second user of the two or more users, a customization for the second user based at least in part on an appearance of the second user in the image; and communicating an indication of the customization for the second user to an electronic device associated with the second user. (The above limitations are similar to those of Claim 1 and are therefore rejected under a similar rationale as that of Claim 1.)

Regarding Claim 13, Claim 13 is similar to that of Claim 7 and is therefore rejected under a similar rationale as that of Claim 7.

Regarding Claim 14, Claim 14 is similar to that of Claim 8 and is therefore rejected under a similar rationale as that of Claim 8.
Regarding Claim 15, Claim 15 is similar to that of Claim 9 and is therefore rejected under a similar rationale as that of Claim 9.

Regarding Claim 16, Claim 16 is similar to that of Claim 10 and is therefore rejected under a similar rationale as that of Claim 10.

Regarding Claim 17, Hunsmann in view of Lin disclose A system, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the system to: (See Hunsmann [0006], “A system is disclosed that analyzes a digital image of a person's face (e.g., from a selfie) for color and facial features or characteristics to derive a set of colors and patterns”. Also see Hunsmann [0075], “The machine 1500 may include processors 1502, memory 1504, and . . .”)

capture an image including multiple users; detect two or more users of the multiple users that are included in the image; generate, for a first user of the two or more users, a customization for the first user based at least in part on an appearance of the first user in the image; generate, for a second user of the two or more users, a customization for the second user based at least in part on an appearance of the second user in the image; communicate an indication of the customization for the second user to an electronic device associated with the second user. (The above limitations are similar to those of Claim 1 and are therefore rejected under a similar rationale as that of Claim 1.)

Claims 3-5 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hunsmann in view of Lin and in further view of Gan et al. (US 20160048296 A1) (hereinafter referred to as Gan).

Regarding Claim 3, Hunsmann in view of Lin fail to explicitly disclose The first electronic device of claim 1, wherein the appearance of the first user in the image includes apparel worn by the first user in the image, and the appearance of the second user in the image includes apparel worn by the second user in the image.
Gan teaches wherein the appearance of the first user in the image includes apparel worn by the first user in the image, and the appearance of the second user in the image includes apparel worn by the second user in the image. (See [0016], “At block 404, the companion device 200 acquires an image of the user 300 using the camera 212, either automatically or in response to user input (e.g., after prompting the user to take a self-portrait). At block 406, the companion device 200 analyzes the image to determine attributes of the image, such as the color of the clothing being worn by the user 300.” In combination with Lin [0065] teaching analyzing each face in an image (first and second user), and Gan teaching to also analyze the clothing worn by the user, the above limitation is taught.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hunsmann in view of Lin with Gan to include analyzing the apparel worn by the user. The motivation to combine Hunsmann in view of Lin with Gan would have been obvious as Gan is within the same field of analyzing images of users (See Abstract). The benefit of applying the analysis to the clothing as well is that it can give more data on the appearance of the user. Clothing and apparel can be a good descriptor of the person, and thus generating a customization that includes the apparel would, in theory, create a more personalized customization.

Regarding Claim 4, Hunsmann in view of Lin and Gan disclose The first electronic device of claim 1, wherein the first electronic device further comprises a display, the first user in the image is a user of the first electronic device, and (See Hunsmann [0074], “The machine 1500 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC) . . .” Note that a computer is commonly known to be an electronic device comprising a display.
Also see Hunsmann [0006], “A system is disclosed that analyzes a digital image of a person’s face (e.g., from a selfie)”.)

the at least one processor is further configured to cause the first electronic device to customize the display of the first electronic device based at least in part on the customization for the first user in the image. (See Hunsmann [0006] teaching the customization. See Gan [0018], “At block 412, the wearable device 100 implements a display theme based on the analysis conducted at block 410. . . the display theme is based on a predefined color theme that corresponds to a detected color palette of the user’s clothing determined from the image.” The motivation to combine would have been similar to that of the Claim 3 rejection.)

Regarding Claim 5, Hunsmann in view of Lin and Gan disclose The first electronic device of claim 4, wherein the at least one processor is further configured to cause the first electronic device to: identify a color within apparel worn by the first user in the image; and (See Gan [0018], “At block 412, the wearable device 100 implements a display theme based on the analysis conducted at block 410. . . the display theme is based on a predefined color theme that corresponds to a detected color palette of the user’s clothing determined from the image.”)

render a background image on the display that includes a similar color to the identified color within the apparel. (See Gan [0018], “At block 412, the wearable device 100 implements a display theme based on the analysis conducted at block 410. . . the display theme is based on a predefined color theme that corresponds to a detected color palette of the user’s clothing determined from the image.” Note that a display theme would correspond to a background image on the display.)

Regarding Claim 18, Claim 18 is similar to that of Claim 3 and is therefore rejected under a similar rationale as that of Claim 3.
Regarding Claim 19, Claim 19 is similar to that of Claim 4 and is therefore rejected under a similar rationale as that of Claim 4.

Regarding Claim 20, Claim 20 is similar to that of Claim 5 and is therefore rejected under a similar rationale as that of Claim 5.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANG G HUYNH whose telephone number is (571)272-5432. The examiner can normally be reached Mon-Thu 7:30am-4:30pm EST | Fri 7:30am-11:30am EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.G.H./ Examiner, Art Unit 2611
/KEE M TUNG/ Supervisory Patent Examiner, Art Unit 2611
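For orientation, the multi-user flow recited in Claim 1 (obtain an image, detect the users in it, generate a per-user customization from each user's appearance, and communicate an indication of each customization to that user's device) can be sketched roughly as follows. This is an illustrative Python sketch only: detect_users, customize, and send_indication are hypothetical stand-ins, not code or APIs from the application or the cited Hunsmann, Lin, or Gan references.

```python
from dataclasses import dataclass

# Illustrative sketch of the Claim 1 flow. All names here are hypothetical
# stand-ins; this is not the applicant's or the cited references' implementation.

@dataclass
class DetectedUser:
    user_id: str
    dominant_rgb: tuple  # appearance feature, e.g. sampled from face or apparel

def detect_users(image) -> list:
    """Stand-in for a face-detection step (cf. Lin's face detection model)."""
    return [DetectedUser("user-1", (180, 40, 40)),
            DetectedUser("user-2", (30, 90, 200))]

def customize(user: DetectedUser) -> dict:
    """Derive a per-user customization from appearance (cf. Hunsmann's
    color/pattern derivation from a face image)."""
    return {"theme_color": user.dominant_rgb}

def send_indication(user_id: str, customization: dict) -> str:
    """Stand-in for communicating an indication of the customization to the
    electronic device associated with that user."""
    return f"{user_id}: theme {customization['theme_color']}"

def process(image) -> list:
    # One customization per detected user, each routed to that user's device.
    return [send_indication(u.user_id, customize(u)) for u in detect_users(image)]

for line in process(image=None):
    print(line)
```

The examiner's combination rationale maps onto this sketch directly: Hunsmann supplies the single-user customize step, while Lin supplies detect_users over an image containing multiple people and the delivery to an associated client device.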

Prosecution Timeline

Jul 31, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597100: DEEP IMAGE DELIGHTING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586309: MACHINE-LEARNING METHOD ON VECTORIZED THREE-DIMENSIONAL MODEL AND LEARNING SYSTEM THEREOF
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581083: METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR COMPRESSING TWO-DIMENSIONAL IMAGE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12560450: METHOD AND SERVER FOR GENERATING SPATIAL MAP
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554815: DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR AUTHORIZING A SECURE OPERATION
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+50.0%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
