DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 5/30/2025 has been entered.
Status of Claims
Claims 1-4, 6-7, and 9-20 are pending.
Claims 1-2, 13-14, and 20 have been amended.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because of the new ground of rejection.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. Claims 1-4, 6-7, and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hildreth (US 2009/0138805), hereinafter referred to as Hildreth, in view of Felkai (US 2013/0346075), hereinafter referred to as Felkai, and further in view of Bratton (US 2012/0063743), hereinafter referred to as Bratton.
8. Regarding claim 1, Hildreth discloses an effect display method, comprising: in response to triggering of an effect by a current user, acquiring a custom asset file corresponding to the effect, wherein the effect is configured to display at least one frame of effect image (fig. 5, paragraphs 48-49 and 54 wherein third-party device may communicate with any type of media device and be allowed to control the media settings of any type of media device, and wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits);
acquiring association relationship information from a server based on the association relationship list, wherein the association relationship information indicates a user identifier of an associated user having an association relationship with the current user (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
However, Hildreth is silent in regards to disclosing generating a corresponding effect image based on the association relationship information.
Felkai discloses generating a corresponding effect image based on the association relationship information (fig. 4-5, paragraph 66 wherein an animation of the remote user may be superimposed, e.g., by computing device, over the media content based on the received visual data). Felkai (paragraph 26) provides motivation to combine the references wherein the one or more superimposed animations may be rendered by computing device based on visual data received from the remote computing devices.
However, Hildreth and Felkai are silent in regards to disclosing loading the custom asset file from a project file corresponding to the effect through a resource reference interface of the effect to obtain an association relationship list corresponding to the effect.
Bratton discloses loading the custom asset file from a project file corresponding to the effect through a resource reference interface of the effect to obtain an association relationship list corresponding to the effect (paragraph 45 wherein each video asset is preferably a separate data file from assets).
Bratton provides motivation to combine the references wherein the composite video includes at least a video asset and a non-video asset in separate files (paragraph 11). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hildreth and Felkai with the teachings of Bratton (paragraph 11).
9. Regarding claim 2, Hildreth discloses the method according to claim 1, wherein loading the custom asset file through the resource reference interface of the effect to obtain the association relationship list corresponding to the effect, comprises: loading the custom asset file from the project file corresponding to the effect through the resource reference interface of the effect, so as to obtain at least one association relationship resource object and an image model corresponding to each of the at least one association relationship resource object, wherein the association relationship resource object is configured to provide the association relationship information for the effect image corresponding to the image model (fig. 1-3, paragraphs 40-43 and 73 wherein system identifies viewer and provides selectable content favored by the identified viewer for selection);
and obtaining the association relationship list corresponding to the effect according to the association relationship resource object and the corresponding image model (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
10. Regarding claim 3, Hildreth discloses the method according to claim 2, wherein obtaining the association relationship list corresponding to the effect according to the association relationship resource object and the corresponding image model, comprises: in response to the image model being a single-frame image, accessing a first attribute of the association relationship resource object to obtain first relationship data indicating the associated user, wherein the first relationship data comprises a first identifier and a corresponding second identifier, the first identifier represents an associated user name of the associated user, and the second identifier represents an associated user avatar of the associated user (fig. 1-3, paragraphs 49 and 52 wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits, and wherein system may be configured to display images such as an avatar);
and generating the association relationship list corresponding to the target effect according to the first relationship data (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
11. Regarding claim 4, Hildreth discloses the method according to claim 3, wherein accessing the first attribute of the association relationship resource object to obtain the first relationship data indicating the associated user, comprises: through accessing the first attribute of the association relationship resource object, acquiring at least one first identifier and at least one second identifier (fig. 1-3, paragraph 49 wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits);
storing the first identifier and the second identifier with the same identification information in a paired mode to obtain a pairing table containing at least one pairing record (fig. 1-3, paragraph 111 wherein registering users may involve capturing images of a known user and storing the captured images for use in automatically identifying the known user in later processes);
and generating the first relationship data according to the pairing table (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
12. Regarding claim 6, Hildreth discloses the method according to claim 2, wherein obtaining the association relationship list corresponding to the effect according to the association relationship resource object and the corresponding image model, comprises: in response to the image model being a dynamic image, accessing a second attribute of the association relationship resource object to obtain second relationship data indicating the associated user (fig. 2 and 13-14, paragraph 142 wherein the processor then may compare the position of each of the multiple users to the position of the remote control and identify the user whose position is closest to the remote control), wherein the second relationship data comprises a preset number of third identifiers, each of the preset number of third identifiers corresponds to user information of the associated user, and the preset number is the number of image frames of the dynamic image (fig. 2 and 11, paragraph 142 wherein processor may select the user whose position is closest to the remote control as the user operating the remote control and determine that the other users in the accessed images are not operating the remote control);
and generating the association relationship list corresponding to the effect according to the second relationship data (fig. 11, paragraph 143 wherein processor may associate a detected user in a camera image with a command transmitted by a remote control based on the position of the remote control relative to detected users within a camera image).
13. Regarding claim 7, Hildreth discloses the method according to claim 5, wherein obtaining the association relationship list corresponding to the effect according to the association relationship resource object and the corresponding image model, comprises: determining a number of the corresponding associated users according to the association relationship resource object and the corresponding image model (fig. 1-3, paragraphs 40-43 and 73 wherein system identifies viewer and provides selectable content favored by the identified viewer for selection);
and obtaining the association relationship list corresponding to the effect according to the number of the associated users (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
14. Regarding claim 9, Hildreth discloses the method according to claim 1, wherein acquiring the corresponding association relationship information from the server based on the association relationship list, comprises: acquiring an association relationship type, and determining an association relationship list from at least two association relationship lists (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members);
and based on the association relationship list, acquiring the corresponding association relationship information from the server (fig. 14-15, paragraph 170 wherein system obtains personalized preferences of users from server over network).
15. Regarding claim 10, Felkai discloses the method according to claim 1, wherein, after generating the corresponding effect image based on the association relationship information, the method further comprises: generating a video based on the effect image (fig. 4-5, paragraph 66 wherein an animation of the remote user may be superimposed, e.g., by computing device, over the media content based on the received visual data).
Hildreth discloses after the video is released, acquiring at least one piece of association relationship information, wherein each of the at least one piece of association relationship information is the association relationship information corresponding to an effect image displayed in a display pose in the video (fig. 13-14, paragraph 164 wherein processor may ignore content items marked by a user as private and ensure that all of the content items included on the combined list are appropriate for all of the detected users (e.g., mature programs that otherwise would be included in the combined list may not be included in the combined list when children are present));
and transmitting hit information to the server based on the association relationship information, wherein the hit information is configured to enable the server to transmit a notification message to an associated user corresponding to the association relationship information (fig. 13-14, paragraph 164 wherein processor may ignore content items marked by a user as private and ensure that all of the content items included on the combined list are appropriate for all of the detected users (e.g., mature programs that otherwise would be included in the combined list may not be included in the combined list when children are present)).
16. Regarding claim 11, Hildreth discloses the method according to claim 10, further comprising: obtaining a notification toggle state by loading the custom asset file (fig. 4-6, paragraph 70 wherein buttons may allow a user to control personalized media preferences. For example, a user may use the buttons to add or delete a channel from the user's personalized favorite channels list);
and wherein transmitting the hit information to the server based on the association relationship information, comprises: transmitting the hit information to the server based on the association relationship information in response to the toggle state being a state (fig. 14-15, paragraph 170 wherein system obtains personalized preferences of users from server over network).
17. Regarding claim 12, Hildreth discloses the method according to claim 10, wherein the hit information comprises at least one of the following: an identification identifier of the current user, an identification identifier of the effect, and video release time (fig. 7-9, paragraph 137 wherein processor may access images that are continuously (or regularly/periodically) captured and identify images taken at a time when the user input command was received).
18. Regarding claim 13, Hildreth discloses an electronic device, comprising: a processor and a memory; wherein the memory stores a computer execution instruction; and the processor executes the computer execution instruction stored in the memory to implement: in response to triggering of an effect by a current user, acquiring a custom asset file corresponding to the effect, wherein the effect is configured to display at least one frame of effect image (fig. 5, paragraphs 48-49 and 54 wherein third-party device may communicate with any type of media device and be allowed to control the media settings of any type of media device, and wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits);
acquiring association relationship information from a server based on the association relationship list, wherein the association relationship information indicates a user identifier of an associated user having an association relationship with the current user (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
However, Hildreth is silent in regards to disclosing generating a corresponding effect image based on the association relationship information.
Felkai discloses generating a corresponding effect image based on the association relationship information (fig. 4-5, paragraph 66 wherein an animation of the remote user may be superimposed, e.g., by computing device, over the media content based on the received visual data). Felkai (paragraph 26) provides motivation to combine the references wherein the one or more superimposed animations may be rendered by computing device based on visual data received from the remote computing devices.
However, Hildreth and Felkai are silent in regards to disclosing loading the custom asset file from a project file corresponding to the effect through a resource reference interface of the effect to obtain an association relationship list corresponding to the effect.
Bratton discloses loading the custom asset file from a project file corresponding to the effect through a resource reference interface of the effect to obtain an association relationship list corresponding to the effect (paragraph 45 wherein each video asset is preferably a separate data file from assets).
Bratton provides motivation to combine the references wherein the composite video includes at least a video asset and a non-video asset in separate files (paragraph 11). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hildreth and Felkai with the teachings of Bratton (paragraph 11).
19. Regarding claim 14, Hildreth discloses the electronic device according to claim 13, the processor executes the computer execution instruction stored in the memory to further implement: loading the custom asset file from the project file corresponding to the effect through the resource reference interface of the effect, so as to obtain at least one association relationship resource object and an image model corresponding to each of the at least one association relationship resource object, wherein the association relationship resource object is configured to provide the association relationship information for the effect image corresponding to the image model (fig. 1-3, paragraphs 40-43 and 73 wherein system identifies viewer and provides selectable content favored by the identified viewer for selection);
and obtaining the association relationship list corresponding to the effect according to the association relationship resource object and the corresponding image model (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
20. Regarding claim 15, Hildreth discloses the electronic device according to claim 14, the processor executes the computer execution instruction stored in the memory to further implement: in response to the image model being a single-frame image, accessing a first attribute of the association relationship resource object to obtain first relationship data indicating the associated user, wherein the first relationship data comprises a first identifier and a corresponding second identifier, the first identifier represents an associated user name of the associated user, and the second identifier represents an associated user avatar of the associated user (fig. 1-3, paragraphs 49 and 52 wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits, and wherein system may be configured to display images such as an avatar);
and generating the association relationship list corresponding to the effect according to the first relationship data (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
21. Regarding claim 16, Hildreth discloses the electronic device according to claim 15, the processor executes the computer execution instruction stored in the memory to further implement: through accessing the first attribute of the association relationship resource object, acquiring at least one first identifier and at least one second identifier (fig. 1-3, paragraph 49 wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits);
storing the first identifier and the second identifier with the same identification information in a paired mode to obtain a pairing table containing at least one pairing record (fig. 1-3, paragraph 111 wherein registering users may involve capturing images of a known user and storing the captured images for use in automatically identifying the known user in later processes);
and generating the first relationship data according to the pairing table (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
22. Regarding claim 17, Hildreth discloses the electronic device according to claim 16, the processor executes the computer execution instruction stored in the memory to further implement: independently storing the first identifier or the second identifier respectively which does not have the same identification information to obtain a non-pairing table (fig. 1-3, paragraph 111 wherein registering users may involve capturing images of a known user and storing the captured images for use in automatically identifying the known user in later processes);
wherein generating the first relationship data according to the pairing table, comprises: generating the first relationship data according to the pairing table and the non-pairing table (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
23. Regarding claim 18, Hildreth discloses the electronic device according to claim 14, the processor executes the computer execution instruction stored in the memory to further implement: in response to the image model being a dynamic image, accessing a second attribute of the association relationship resource object to obtain second relationship data indicating the associated user, wherein the second relationship data comprises a preset number of third identifiers, each of the preset number of third identifiers corresponds to user information of the associated user, and the preset number is the number of image frames of the dynamic image (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members);
and generating the association relationship list corresponding to the effect according to the second relationship data (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
24. Regarding claim 19, Hildreth discloses the electronic device according to claim 17, the processor executes the computer execution instruction stored in the memory to further implement: determining a number of the corresponding associated users according to the association relationship resource object and the corresponding image model (fig. 1-3, paragraphs 40-43 and 73 wherein system identifies viewer and provides selectable content favored by the identified viewer for selection);
and obtaining the association relationship list corresponding to the effect according to the number of the associated users (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
25. Regarding claim 20, Hildreth discloses a computer-readable storage medium, wherein the computer-readable storage medium stores a computer execution instruction that, when executed by a processor, causes the processor to implement: in response to triggering of an effect by a current user, acquiring a custom asset file corresponding to the effect, wherein the effect is configured to display at least one frame of effect image (fig. 5, paragraphs 48-49 and 54 wherein third-party device may communicate with any type of media device and be allowed to control the media settings of any type of media device, and wherein viewing habits may be used by the third party to suggest customized media settings for the dad user and the mom user based on the viewing habits);
acquiring association relationship information from a server based on the association relationship list, wherein the association relationship information indicates a user identifier of an associated user having an association relationship with the current user (fig. 1-3, paragraph 112 wherein system provides a parent with the ability to register the faces of all members of a household including children, and set media preferences for each family member, or specific combinations of family members).
However, Hildreth is silent in regards to disclosing generating a corresponding effect image based on the association relationship information.
Felkai discloses generating a corresponding effect image based on the association relationship information (fig. 4-5, paragraph 66 wherein an animation of the remote user may be superimposed, e.g., by computing device, over the media content based on the received visual data). Felkai (paragraph 26) provides motivation to combine the references wherein the one or more superimposed animations may be rendered by computing device based on visual data received from the remote computing devices.
However, Hildreth and Felkai are silent in regards to disclosing loading the custom asset file from a project file corresponding to the effect through a resource reference interface of the effect to obtain an association relationship list corresponding to the effect.
Bratton discloses loading the custom asset file from a project file corresponding to the effect through a resource reference interface of the effect to obtain an association relationship list corresponding to the effect (paragraph 45 wherein each video asset is preferably a separate data file from assets).
Bratton provides motivation to combine the references wherein the composite video includes at least a video asset and a non-video asset in separate files (paragraph 11). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hildreth and Felkai with the teachings of Bratton (paragraph 11).
26. Claims 5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Hildreth, in view of Felkai, in further view of Bratton, and in further view of Zito (US 2016/0021412), hereinafter referred to as Zito.
27. Regarding claim 5, Hildreth, Felkai and Bratton are silent in regards to disclosing the method according to claim 4, further comprising: independently storing the first identifier or the second identifier respectively which does not have the same identification information to obtain a non-pairing table; wherein generating the first relationship data according to the pairing table, comprises: generating the first relationship data according to the pairing table and the non-pairing table.
Zito discloses the method according to claim 4, further comprising: independently storing the first identifier or the second identifier respectively which does not have the same identification information to obtain a non-pairing table (fig. 9-10, paragraphs 86-87 wherein dimensions such as demographics can be further specified with the aid of viewer-identifiers that can tap a database on the storage subsystem storage, and wherein multi-media presentation system may also determine if there is hierarchy or relationship among viewers, such as but not limited to parent-child, supervisor-supervised, husband-wife, etc. through the database on individual viewers);
wherein generating the first relationship data according to the pairing table, comprises: generating the first relationship data according to the pairing table and the non-pairing table (fig. 9-10, paragraphs 86-87 wherein dimensions such as demographics can be further specified with the aid of viewer-identifiers that can tap a database on the storage subsystem storage, and wherein multi-media presentation system may also determine if there is hierarchy or relationship among viewers, such as but not limited to parent-child, supervisor-supervised, husband-wife, etc. through the database on individual viewers). Zito (paragraph 2) provides motivation to combine the references wherein the present disclosure relates to dynamic creation of an art form combined from real-time environmental variables, saved selections and favorites coded to an identified viewer. All of the elements are known. Combining the references would yield the instant claims wherein system uses database information to compare and identify a given viewer of content. Therefore, the invention would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
28. Regarding claim 8, Zito discloses the method according to claim 7, wherein determining the number of the corresponding associated users according to the association relationship resource object and the corresponding image model, comprises: acquiring the number of first associated users, wherein the number of the first associated users is a larger value of a capacity of the pairing table corresponding to first relationship data and a preset number corresponding to second relationship data (fig. 9-10, paragraphs 86-87 wherein dimensions such as demographics can be further specified with the aid of viewer-identifiers that can tap a database on the storage subsystem storage, and wherein multi-media presentation system may also determine if there is hierarchy or relationship among viewers, such as but not limited to parent-child, supervisor-supervised, husband-wife, etc. through the database on individual viewers);
and determining the number of the associated users according to a sum of the number of the first associated users and the capacity of the non-pairing table corresponding to the first relationship data (fig. 9-10, paragraphs 86-87 wherein dimensions such as demographics can be further specified with the aid of viewer-identifiers that can tap a database on the storage subsystem storage, and wherein multi-media presentation system may also determine if there is hierarchy or relationship among viewers, such as but not limited to parent-child, supervisor-supervised, husband-wife, etc. through the database on individual viewers).
Conclusion
29. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES N HICKS whose telephone number is (571)270-3010. The examiner can normally be reached Monday-Friday 10-7 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart, can be reached on 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES N HICKS/Examiner, Art Unit 2424
/BENJAMIN R BRUCKART/Supervisory Patent Examiner, Art Unit 2424