DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims Status
Claims 1-20 are pending and rejected.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/19/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Claim Objections
The claims are objected to because of the following informalities:
In claim 20, "device of claim 1" should read --device of claim 12--.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1:
Claims 1-11 are directed to a method, which is a process. Claims 12-20 are directed to a device, which is a machine. Therefore, claims 1-20 are directed to one of the four statutory categories of invention.
Step 2A (Prong 1):
Taking claim 1 as representative, claim 1 sets forth the following limitations which recite the abstract idea of providing product recommendations:
acquiring first user information;
applying the first user information to a relationship learning model based on a rendering history corresponding to content; and
providing content recommendation information corresponding to the first user information using the output information of the learning model.
The recited limitations as a whole set forth a process for providing recommendations. These limitations amount to certain methods of organizing human activity, including commercial or legal interactions (e.g., advertising, marketing, or sales activities or behaviors).
Such concepts have been identified by the courts as abstract ideas (see MPEP 2106).
Step 2A (Prong 2):
Examiner acknowledges that representative claim 1 does recite additional elements, such as a device.
Taken individually and as a whole, claim 1 does not integrate the recited judicial exception into a practical application of the exception. The claim merely includes instructions to implement an abstract idea on a computer, or to merely use a computer as a tool to perform an abstract idea, while the additional elements do no more than generally link the use of the judicial exception to a particular technological environment or field of use.
Furthermore, this is also because the claim fails to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement a judicial exception with a particular machine, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
In view of the above, under Step 2A (Prong 2), claim 1 does not integrate the recited exception into a practical application (see again MPEP 2106).
Step 2B:
When taken individually or as a whole, the additional elements of claim 1 do not provide an inventive concept (i.e., the additional elements do not amount to significantly more than the exception itself). As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer device to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Certain additional elements also recite well-understood, routine, and conventional activity (see MPEP 2106.05(d)).
Even when considered as an ordered combination, the additional elements of claim 1 do not add anything further than when they are considered individually.
In view of the above, claim 1 does not provide an inventive concept under step 2B, and is ineligible for patenting.
Dependent claims 2-11 recite further detail of the judicial exception (abstract idea) of claim 1, such as by further defining the process for providing recommendations. Thus, each of claims 2-11 is held to recite a judicial exception under Step 2A (Prong 1) for at least similar reasons as discussed above.
Therefore, dependent claims 2-11 do not add "significantly more" to the abstract idea. The dependent claims recite additional functions that describe the abstract idea, only generally link the abstract idea to a particular technological environment, and are applied on a generic computer. Further, the additional limitations fail to provide an improvement to the functioning of the computer, another technology, or a technical field.
Even when viewed as an ordered combination, the dependent claims simply convey the abstract idea itself applied on a generic computer and are held to be ineligible under Steps 2A/2B for at least similar rationale as discussed above regarding claim 1.
The analysis above applies to all statutory categories of invention. Regarding independent claim 12 (device), the claim recites substantially similar limitations as set forth in claim 1. As such, claim 12 and its dependent claims 13-20 are rejected for at least similar rationale as discussed above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yankovich et al. (U.S. Pre-Grant Publication No. 2019/0156377 A1) (“Yankovich”).
Regarding claims 1 and 12, Yankovich teaches a method (and related device) of operating a recommendation information providing device, the method comprising:
acquiring first user information (para [0045]);
applying the first user information to a relationship learning model based on a rendering history corresponding to content (para [0046]); and
providing content recommendation information corresponding to the first user information using the output information of the learning model (para [0047]).
Regarding claims 2 and 13, Yankovich teaches the above method and device of claims 1 and 12. Yankovich also teaches wherein the relationship learning model is a model in which rendering-based visual feature information of the content according to the provision of a content service is pre-trained in response to metadata feature information and user metadata of the content (para [0055]-[0057]).
Regarding claims 3 and 14, Yankovich teaches the above method and device of claims 2 and 13. Yankovich also teaches wherein the metadata feature information of the content is configured from at least one of rendering condition metadata acquired in connection with rendering of the content, model information metadata corresponding to the content, and product information metadata corresponding to the content (para [0055]-[0057]).
Regarding claims 4 and 15, Yankovich teaches the above method and device of claims 3 and 14. Yankovich also teaches wherein the rendering condition metadata comprises first metadata corresponding to a detailed attribute of the content, second metadata corresponding to a rendering condition of the content, and third metadata corresponding to a surrounding environment in which the content is rendered (para [0055]-[0057]).
Regarding claims 5 and 16, Yankovich teaches the above method and device of claims 4 and 15. Yankovich also teaches wherein the first metadata comprises at least one of a length, a color, a category, and a type corresponding to the detailed attribute of the content (para [0042]),
wherein the second metadata comprises at least one of whether it is virtual reality rendering, whether it is augmented reality rendering, an illumination condition, a camera position, a camera focus, a camera angle, and whether it is composite scene content corresponding to the rendering state of the content (para [0065]), and
wherein the third metadata comprises at least one of region type information, region size information, and region material indexing information corresponding to the rendered surrounding environment (para [0043]).
Regarding claim 6, Yankovich teaches the above method of claim 1. Yankovich also teaches wherein the user recommendation information comprises at least one of basic user information collected in response to content, rendering interface input information corresponding to the basic user information, preference input information, purchase input information, evaluation input information, and category input information (para [0064]).
Regarding claims 7 and 17, Yankovich teaches the above method and device of claims 1 and 12. Yankovich also teaches wherein the visual feature information comprises a visual feature vector acquired by applying one or more rendering images acquired in response to the content to a deep learning network (para [0097]).
Regarding claims 8 and 18, Yankovich teaches the above method and device of claims 7 and 17. Yankovich also teaches wherein the one or more rendering images comprise images in which the content or a scene image including the content is rendered for each preset three-dimensional viewpoint (para [0068], [0092]).
Regarding claims 9 and 19, Yankovich teaches the above method and device of claims 8 and 18. Yankovich also teaches wherein the visual feature information comprises a visual feature vector extracted from the deep learning network according to geometric structure information and color information according to the three-dimensional viewpoint of the rendering image (para [0068], [0092]).
Regarding claim 10, Yankovich teaches the above method of claim 1. Yankovich also teaches wherein the providing of the content recommendation information comprises:
indexing recommended products or recommended interior items in response to recommended visual feature information and recommended condition metadata output from the learning model (para [0062]);
constructing recommended item list information corresponding to the first user information using the recommended products or recommended interior items (para [0062]); and
providing the recommended item list information to a user terminal corresponding to the first user information (para [0062]).
Regarding claim 11, Yankovich teaches the above method of claim 10. Yankovich also teaches further comprising: identifying rendering interface environment information corresponding to each item in the recommended item list; and providing the rendering interface environment information to the user terminal to process the recommended item selected by the user terminal to be output through a rendering environment constructed according to the rendering interface environment information (para [0068], [0092]).
Regarding claim 20, Yankovich teaches the above device of claim 12. Yankovich also teaches wherein the information providing unit indexes recommended products or recommended interior items in response to recommended visual feature information and recommended condition metadata output from the learning model, constructs recommended item list information corresponding to the first user information using the recommended products or recommended interior items, provides the recommended item list information to a user terminal corresponding to the first user information, identifies rendering interface environment information corresponding to each item in the recommended item list, and provides the rendering interface environment information to the user terminal to process the recommended item selected by the user terminal to be output through a rendering environment constructed according to the rendering interface environment information (para [0062], [0068], [0092]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANAND LOHARIKAR whose telephone number is 571-272-8756. The examiner can normally be reached Monday through Friday, 9am – 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANAND LOHARIKAR/Primary Examiner, Art Unit 3689