DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d) to Japanese Patent Application No. JP2022-132892, filed on 08/24/2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Status of Claims
Applicant’s communications filed on 02/05/2026 have been considered.
Claims 1, 3-5, and 12-13 have been amended.
Claims 1-13 are currently pending and have been examined.
Response to Arguments
Applicant’s arguments and amendments filed with respect to the previously filed objections to claims 3-5 (Remarks Page 6) have been considered and are persuasive. The objections are hereby withdrawn.
Applicant’s arguments filed with respect to the rejection of claims under 35 USC 101 have been fully considered but they are not persuasive.
Applicant argues that the claims are not directed to an abstract idea because the claim “specifies how the information is obtained, processed, and used by the apparatus to control a display operation,” and “constitutes a technological process performed by a machine” (Remarks Page 6). This argument has been considered and is not persuasive. The MPEP enumerates groupings of abstract ideas, thereby synthesizing the holdings of various court decisions to facilitate examination. See MPEP 2106.04. Among the enumerated groupings is the Certain Methods of Organizing Human Activity grouping, which includes activity that falls within the enumerated sub-grouping of commercial or legal interactions, including subject matter relating to agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. With respect to the claim amendments, an information processing apparatus comprising: a network interface; a processor; a user terminal; a database; an arrangement-information database; a display of the user terminal; and a non-transitory computer readable medium storing a program causing a computer to execute a process have been analyzed as additional elements and accordingly are not analyzed under Step 2A, Prong One. The amendments further recite limitations such as obtaining an image from a user in which an article is shown; extracting an object region corresponding to the article from the obtained image; extracting features of the object region and identifying the article; extracting characteristics including at least one of a size of the article, a color of the article, or a location of the article; searching for arrangement-related information, the arrangement-related information including a storage good for arranging the article and a method for arranging the article; and exerting control to display a search result. These amendments represent certain methods of organizing human activity.
Paragraph [0032] of the specification illustrates that a product used for storage of an article may be an item offered for sale, and paragraph [0053] further discusses that user search results are items for sale; accordingly, the recited limitations claim the collection and analysis of image data as search information for the purpose of displaying items for sale as search results. These limitations fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas enumerated in the MPEP, in that they recite commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claim limitations represent sales activities and behaviors because they recite the collection and analysis of image data of a product in order to provide search results regarding the product, including appropriate storage goods for sale. These are sales activities because they pertain to items for sale and provide a user with a way to search for said items (see at least Specification [0034]). Accordingly, the claims recite Certain Methods of Organizing Human Activity.
Applicant argues that the claims integrate the abstract idea into a practical application because “[the claim] is a specific technological implementation governing how information is processed and displayed” (Remarks Pages 6 and 7). This argument has been considered and is not persuasive. The MPEP sets forth, in Step 2A, Prong Two, that a claim that recites a judicial exception is not directed to that judicial exception if the claim as a whole “integrates the recited judicial exception into a practical application of that exception.” The evaluation under Prong Two requires the use of the considerations (e.g., improving technology, effecting a particular treatment or prophylaxis, implementing with a particular machine, etc.) identified by the Supreme Court and the Federal Circuit to ensure that the claim as a whole ‘integrates [the] judicial exception into a practical application [that] will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception.’ In the instant case, the claims include additional elements such as an information processing apparatus comprising: a network interface; a processor; a user terminal; a database; an arrangement-information database; a display of the user terminal; and a non-transitory computer readable medium storing a program causing a computer to execute a process. While these elements are recited, they are merely peripherally incorporated in order to implement the abstract idea. Put another way, these additional elements are merely used to apply the abstract idea of providing information about products in a technological environment without effectuating any improvement or change to the functioning of the additional elements or other technology.
Applicant’s disclosure does not articulate or suggest how these additional elements function, individually or in combination, in any manner other than using generic functionality, nor does the disclosure articulate how the elements provide a technical improvement. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they merely amount to using the software architecture as a tool to perform the abstract idea.
Applicant argues that the claim amounts to significantly more than the abstract idea because “[the claim reflects] a specific technological solution for retrieving and presenting arrangement-related information based on image-derived data” (Remarks Page 7). This argument has been considered and is not persuasive. As discussed above, Applicant’s disclosure, including the amended claims and specification, does not suggest how the claimed additional elements function, individually or in combination, in any manner other than using generic functionality, such that the claims amount to mere implementation of the abstract idea in a technological environment without effectuating any improvement or change to the functioning of the additional elements or other technology. Accordingly, the claims do not recite a specific technological solution, and the rejection has been maintained.
The rejections of independent claims 12 and 13 inherit the same deficiencies as claim 1, discussed in the paragraphs above; therefore, the rejections have been maintained.
Applicant’s arguments filed with respect to the rejection of claims under 35 USC 102 and 35 USC 103 have been fully considered but are rendered moot under new grounds of rejection.
Applicant argues that the amended claims overcome the currently cited prior art because Besecker fails to disclose or suggest the amended limitations (Remarks Page 7). This argument has been rendered moot under new grounds of rejection, as newly cited Zia has been relied upon to teach searching a database by using the extracted characteristics as a search key. With regard to Applicant’s arguments that Besecker does not teach the amended limitations of the independent claims, including extracting image features for comparison against stored article-image data, this argument is not persuasive because Besecker has been further relied upon to teach a network interface (see at least Besecker [0055]), disclosing that the communications between the user’s mobile computing device and remote computing device 118 occur over a network 116. Besecker has been further relied upon to teach obtain, via the network interface, an image provided from a user terminal in which an article is shown, at (Besecker, [0100-0105][0133]), disclosing that images are provided from the user’s mobile computing device and further analyzed in order to identify products in the images. Besecker further discloses extracting an object region corresponding to the article from the obtained image at (Besecker, [0100-0105][0133]), where the mobile device includes an image analysis algorithm capable of identifying pixels and outlines of products in the received image. Besecker further discloses extracting features of the object region and identifying the article by comparing the extracted features with article-image data stored in a database, at (Besecker, [0127-0128][0133][0154][0156]), disclosing that products, colors, layout, and other features of the room may be identified in an inspiration image, where the image analysis algorithm matches the identified products and colors in the image with a stored inspiration image in the master resource database 122.
The image analysis algorithm additionally extracts object identifiers from the image in order to identify the objects. Besecker has been further relied upon to teach extract, from the image and the object region, characteristics including at least one of a size of the article, a color of the article, or a location of the article, at (Besecker, [0127][0156][0172]), disclosing that the image analyzer/image analysis algorithm identifies items of interest in the image, along with their color and/or patterns visible on the items, and matches identified products and colors in the received inspiration image with an inspiration image stored in a database. This portion further discloses that the size of products may be determined from the inspiration image, based on the amount of space available in the room and rules defining location. Besecker further discloses search, in an arrangement-information database, for arrangement-related information by using the identified article as search keys, the arrangement-related information including a storage good for arranging the article and a method for arranging the article, at (Besecker, [0108][0125][0127-0128][0157]), disclosing that, after identification, the products of the inspiration image 1710 are placed in the user’s AR-room environment in accordance with the room layout, where the system can further identify additional products, including brackets for mounting, to suggest to the user via the identifiers of the respective products, including tags and shared categories. The image analyzer further extracts arrangement and/or décor styling information to guide the selection and/or placement of 3D models within the AR rendering of the user’s environment. The information used for identification of objects within an image is stored in the local resource database 106. 
Besecker has been further relied upon to teach exerting control to display, on a display of the user terminal, a search result including the arrangement-related information obtained by the search, at (Besecker, [0055][0108]), disclosing that the additional products suggested to the user are identified via the product identifier extracted from the image, where the product identifier may be linked with related product identifiers, including a tag or shared category, in the local resource database 106 or master resource database 122, and those products with tags matching the identified product, such as mounting brackets, are added to the user’s cart. Accordingly, the rejection has been maintained under new grounds of rejection, in view of the combination of Besecker/Zia.
With regard to Applicant’s argument that the proposed combinations rely on impermissible hindsight reconstruction (Remarks Pages 7 and 8), this argument has been considered and is rendered moot under the new grounds of rejection discussed above. Examiner notes that the obviousness rejections made under 35 USC 103 in the previously filed Non-Final Rejection (11/5/2025) were made using only knowledge that was within the level of ordinary skill in the art at the time the claimed invention was made, and did not include knowledge gleaned only from Applicant’s application (see Non-Final Rejection, pages 14-20); accordingly, the rejection is proper. See MPEP 2145(X)(A). Examiner directs Applicant’s attention to the motivation paragraphs describing the proper reasoning for the combinations set forth in the 103 rejection of the claims (see Non-Final Rejection, pages 16, 18, and 20).
Independent claims 12 and 13 recite substantially similar subject matter to that claimed in independent claim 1, and therefore claims 12 and 13 stand rejected for similar reasons.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite an abstract idea. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Under Step 1 of the Subject Matter Eligibility Test for Products and Processes, the claims must be directed to one of the four statutory categories. See MPEP 2106.03. Claims 1-11 are directed towards a machine. Claim 12 is directed towards a manufacture. Claim 13 is directed towards a process. Therefore, claims 1-13 are directed to one of the four statutory categories (Step 1: YES, regarding claims 1-13).
Under Step 2A of the MPEP, it is determined whether the claims are directed to a judicially recognized exception. See MPEP 2106.04. Step 2A is a two-prong inquiry.
Under Prong 1, it is determined whether the claim recites a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception.
Taking Claim 1 as representative, claim 1 recites limitations that fall within the certain methods of organizing human activity grouping of abstract ideas, including:
obtain an image provided from a user in which an article is shown;
extract an object region corresponding to the article from the obtained image;
extract features of the object region and identify the article by comparing the extracted features with article-image data stored in a storage;
extract, from the image and the object region, characteristics including at least one of a size of the article, a color of the article, or a location of the article;
search, in an arrangement-information storage, for arrangement-related information by using the identified article and the extracted characteristics as search keys, the arrangement-related information including a storage good for arranging the article and a method for arranging the article; and
exert control to display a search result including the arrangement-related information obtained by the search.
Claims 12 and 13 recite the same limitations believed to be abstract as recited in claim 1, and claim 13 additionally recites an information processing method.
Claim 1, as exemplary, recites the abstract idea of providing information about products. These recited limitations fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas as they relate to commercial interactions of sales activities or behaviors.
The claim limitations of “extracting an object region corresponding to the article from the obtained image,” “extract[ing] features of the object region and identify[ing] the article by comparing the extracted features with article-image data stored in a storage,” and “extract[ing], from the image and the object region, characteristics…” further fall within the “Mental Processes” grouping of abstract ideas, in that they recite observation, evaluation, and judgment (comparing). Accordingly, the claim recites an abstract idea. See MPEP 2106.04.
Accordingly, under Prong One of Step 2A of the Alice/Mayo test, claims 1, 12 and 13 recite an abstract idea (Step 2A, Prong One: YES).
Under Prong 2, it is determined whether the claim recites additional elements that integrate the exception into a practical application of the exception.
Claim 1 recites additional elements beyond the judicial exception(s), including an information processing apparatus comprising: a network interface; a processor; a user terminal; a database; an arrangement-information database; and a display of the user terminal. Claim 12 recites the same additional elements as recited in claim 1, and additionally recites a non-transitory computer readable medium storing a program causing a computer to execute a process. Claim 13 recites the same additional elements as recited in claim 1.
These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. As such, these computer-related limitations are not found to be sufficient to integrate the abstract idea into a practical application. That claims 1, 12, and 13 specify execution of the abstract idea of providing information about products in a computer environment merely indicates a field of use in which to apply the abstract idea, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer. As such, under Prong Two of Step 2A of the Alice/Mayo test, when considered both individually and as a whole, the limitations of claims 1, 12, and 13 are not indicative of integration into a practical application (Step 2A, Prong Two: NO).
Since claims 1, 12 and 13 recite an abstract idea and fail to integrate the abstract idea into a practical application, claims 1, 12 and 13 are “directed to” an abstract idea (Step 2A: YES). Accordingly, the judicial exception is not integrated into a practical application.
Next, under Step 2B, the instant claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements of an information processing apparatus comprising: a network interface; a processor; a user terminal; a database; an arrangement-information database; a display of the user terminal; and a non-transitory computer readable medium storing a program causing a computer to execute a process amount to no more than mere instructions to apply the exception using generic computer components. For the same reason, these elements are not sufficient to provide an inventive concept. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claims, and thus the claims are not patent eligible (Step 2B: NO).
Dependent claims 2-11, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. As for dependent claims 2-9, these claims recite limitations that further define the same abstract idea noted in independent claims 1, 12 and 13, and do not recite any additional elements other than what is disclosed in independent claims 1, 12 and 13. Therefore, claims 2-9 are considered patent ineligible for the reasons given above.
As for dependent claims 10 and 11, these claims recite limitations that further define the abstract idea noted in independent claims 1, 12 and 13. Additionally, they recite the following additional limitations:
provide, to the user, a result from a Web search… the Web search being performed on a basis of a word….
The additional element of a Web search is recited at a high level of generality such that it amounts to no more than instructions to apply the judicial exception in a generic technological environment. Even in combination, this additional element does not integrate the abstract idea into a practical application and does not amount to significantly more than the abstract idea itself. Accordingly, under the Alice/Mayo test, claims 1-13 are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-9, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over previously cited Besecker in view of newly cited U.S. Patent Application Publication No. 2019/0197599 A1 to Zia et al., hereinafter Zia.
Regarding Claim 1, Besecker discloses An information processing apparatus comprising ([0198] computing system 2500):
a network interface ([0055]) network 116; and
a processor configured to ([0199] Computing system 2500 includes one or more processors):
obtain, via the network interface, an image provided from a user terminal in which an article is shown ([0100-0103] while searching for products, application 114 of mobile computing device 102 may interface with the user to select a product 504 to add to an AR list by selecting and dragging an image of the product to an “add to AR list” tool 506; [0104-0105] the images provided may be pre-processed to identify products in the image based on a product identifier and selected for rendering in the 3D model; [0133] an inspiration image 810 may be received, and one or more products may be identified in the image and added to the AR list);
extract an object region corresponding to the article from the obtained image ([0057] mobile computing device 102 (e.g., via image analyzer 110) configured to extract relevant feature data from the image (e.g., resolution, the field of view (FOV), focal length, imaging center, etc.); [0127-0128] the image analyzer may process the 2D image and identify one or more types of objects and their relative arrangement within the 2D image, and extract object identifiers; [0156] the image analysis algorithm can identify outlines of products in the image by identifying similar lines, textures, or other image qualities and attributing the similar pixels with stored images of room features or products);
extract features of the object region and identify the article by comparing the extracted features with article-image data stored in a database ([0154] a set of inspiration images is provided at a user interface, where products, colors, layout, and other features of the room may be identified in the inspiration image; [0156] components of the inspiration image 1710 may be identified by an image analysis algorithm by matching the identified products and colors in the inspiration image with a stored inspiration image … see [0127-0128] the image analyzer identifies one or more types of objects, and the extracted object identifiers may be used to guide the identification and/or selection of similarly functional products available within the database; [0133] an inspiration image 810 may be received from the stored images in the master resource database 122, and one or more products may be identified in the image);
extract, from the image and the object region, characteristics including at least one of a size of the article, a color of the article, or a location of the article ([0127] image analyzer utilizes machine learning to identify items of interest and may sample the color and/or patterns visible on one or more objects; [0156] The image analysis algorithm may match the identified products and colors in the inspiration image with a stored inspiration image; [0162] the size of the products corresponding with the inspiration image may be determined based on the amount of space available in the user's room dimensions and rules and/or metadata defining fit, location, and/or compatibility);
search, in an arrangement-information database, for arrangement-related information by using the identified article as search keys, the arrangement-related information including a storage good for arranging the article and a method for arranging the article ([0157] the products in the inspiration image 1710 may be placed in the user’s AR-room in accordance with the rules corresponding with a room layout; [0127-0128] the image analyzer may process the 2D image and identify one or more types of objects and their relative arrangement within the 2D image, and extract object identifiers used to guide the identification and/or selection of similarly functional products available within the local resource database 106; [0108] the system can identify additional products a user may want or need—e.g., brackets for mounting or other ancillary items needed for use with the products. For example, the product identifier may be linked with related product identifiers (e.g., via a tag or shared category) in local resource database 106 or master resource database 122… see [0125] the selected 2D image of the desired layout may be processed by image analyzer 110 to extract arrangement and/or décor styling information, for example, to guide the (e.g., automatic) selection and/or placement of 3D models within the 3D virtual environment(s)); and
exert control to display, on a display of the user terminal, a search result including the arrangement-related information obtained by the search ([0108] the system can identify additional products a user may want or need—e.g., brackets for mounting or other ancillary items needed for use with the products. For example, the product identifier may be linked with related product identifiers (e.g., via a tag or shared category) in local resource database 106 or master resource database 122… the system may determine the tag associated with the main product and search for any other products that share the same tag (e.g., mounting brackets, etc.), and add those products to the user’s cart… see [0055] mobile computing device 102 includes software application 114, which interfaces with the user to compose the virtual environment; [0062] user adds products to the cart via mobile computing device 102);
Besecker does not, however, explicitly disclose searching a database by using the extracted characteristics as a search key.
Zia, on the other hand, teaches a similar system for providing arrangement-related information for products in an environment (see at least Zia [0014]), and additionally teaches the following:
searching a database by using the extracted characteristics as a search key ([0018] the information extracted from analyzing an image is used, in combination with explicit and/or inferred end-user preference information, to query one or more databases of products to quickly and efficiently identify products that may be both complementary to those products identified in the image, and suiting the preferences and tastes of the end-user; [0038] the attributes of the objects identified by the object recognition module 306 may be used to identify products having similar attributes. For example, if the object recognition analysis indicates that an end-user has a furniture item that is from a particular designer or brand, this information may be useful for querying the product database to find other products that will complement those identified in the end-user's room).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Besecker, searching a database by using the extracted characteristics as a search key, as taught by Zia, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Besecker, to include the teachings of Zia, in order to aid a first end-user with the selection and placement of objects, including home furnishings and related products, in an augmented reality scene generated by a mobile computing device (Zia, [0001]).
Regarding Claim 2, Besecker and Zia teach the limitations of claim 1.
Besecker further discloses wherein the arrangement-related information includes information about a product used to arrange the article ([0092] system 100 and/or mobile computing device 102 may be configured to render “assemblies,” which are objects mounted on other objects and/or arranged into some kind of layout… system 100 and/or mobile computing device 102 may be configured to allow for the use of rules and/or metadata defining fit, location, and/or compatibility, which may provide the ability to mount an object on another object; [0094] determine the compatibility/fit between such objects and/or arrange them accordingly. For example, if a user attempts to place a virtual end table and a virtual lamp in the same virtual location, system 100 and/or mobile computing device 102 may be configured to arrange the virtual end table on the floor of the virtual space, and/or arrange the virtual lamp on top of the virtual end table; [0168-0171] discussing mounting points that define 3D locations and bounds of the 3D model, wherein placement rules of 3D models within the 3D virtual environment take mounting points into consideration… see [0107] “Add to cart” tool 514 may enable the user to add the product 504 to a virtual shopping cart so the user can purchase the selected product).
Regarding Claim 3, Besecker and Zia teach the limitations of claim 2.
Besecker further discloses wherein the processor is configured to: ask the user a condition related to a location for use of the product ([0116] The “view in AR room” tool 702 may be activated when one or more products have been added to the user's list of AR-enabled images in order to access an AR-enabled view of the product 504 in a physical environment; [0117] product 504 is added to the virtual environment after selecting the “view in AR room” tool 702… the user may select a 2D image portraying product 504 and the application may determine visible properties of product 504 or features of the AR-enabled environment; [0125] the 2D image may be selected from an assortment supplied by an external resource database, or a trending list of room arrangements provided by a social media site); and
in accordance with the condition related to the location for use which is related to the condition specified in response to the asking, exert control to display the information about the product ([0113] The “view in AR room” tool 702 may indicate one or more product images that are AR-enabled images available to place in an AR-enabled environment associated with the user's physical environment; [0125] the selected 2D image of the desired layout may be processed by image analyzer 110 to extract arrangement and/or décor styling information, for example, to guide the (e.g., automatic) selection and/or placement of 3D models within the 3D virtual environment(s)… see [0120] describing the user specifying the priority of placement).
Regarding Claim 4, Besecker and Zia teach the limitations of claim 2.
Besecker further discloses wherein the processor is configured to: ask the user a condition related to a size of the product and a size of an area which serves as a location for use of the product ([0116] The “view in AR room” tool 702 may be activated when one or more products have been added to the user's list of AR-enabled images in order to access an AR-enabled view of the product 504 in a physical environment; [0117] product 504 is added to the virtual environment after selecting the “view in AR room” tool 702… the user may select a 2D image portraying product 504 and the application may determine visible properties of product 504 or features of the AR-enabled environment; [0122] Web page 704 may illustrate the AR mode. If a user views a room (e.g., while viewing an AR-enabled image of product 504 in the room), the application can also scan the room for configuration, dimensions, structural features (e.g., doors, windows and/or other features), and other room elements); and
in accordance with the size of the area which is related to the condition specified in response to the asking, exert control to display the information about the product ([0161] additional AR-enabled images of products can be added to the virtual environment if space is available; [0162] the size of the products corresponding with the inspiration image may be determined based on the amount of space available in the user’s room dimensions and rules defining fit and location… the brown couch corresponding with the inspiration image may be selected in a five-foot frame rather than a ten-foot frame to correspond with the amount of wall space the user has available in the AR-generated room).
Regarding Claim 7, Besecker and Zia teach the limitations of claim 1.
Besecker further discloses wherein the processor is configured to: when a plurality of articles are identified from the image, present the identified articles to the user ([0133] an inspiration image 810 may be received and one or more products may be identified in the image, and subsequently manually or automatically added via the “add to AR list” tool; [0134] applicable to photos of multiple products); and
specify, as a target of display of the information useful for arrangement, an article selected by the user from the presented articles ([0134] The system may be preconfigured to enable individual ones of the multiple products in the photo to have a corresponding AR-enabled image of the product; [0135] the system may be preconfigured to enable individual ones of the multiple products in the photo to have a corresponding 3D model of the product. The individual or set of products can be displayed in an AR-enabled view in the room).
Regarding Claim 8, Besecker and Zia teach the limitations of claim 1.
Besecker further discloses wherein the arrangement-related information includes information about a method of arranging the article ([0125] the 2D image may be selected from a trending list of room arrangements provided by a social media website. In one or more scenarios, the selected 2D image of the desired layout may be processed by image analyzer 110 to extract arrangement and/or décor styling information, for example, to guide the (e.g., automatic) selection and/or placement of 3D models within the 3D virtual environment(s)).
Claim 9 recites a system comprising limitations substantially similar to those of claim 8. The claim is rejected under substantially similar grounds as claim 8.
Claim 12 is directed to a non-transitory computer readable medium. Claim 12 recites limitations that are substantially parallel in nature to those addressed above for claim 1 which is directed towards a method. The system of Besecker/Zia teaches the limitations of claim 1 as noted above. Besecker further discloses A non-transitory computer readable medium storing a program causing a computer to execute a process (Besecker: [0064][0067]). Claim 12 is therefore rejected for the reasons set forth above in claim 1 and in this paragraph.
Claim 13 is directed to a computer program product. Claim 13 recites limitations that are substantially parallel in nature to those addressed above for claim 1 which is directed towards a method. The system of Besecker/Zia teaches the limitations of claim 1 as noted above. Besecker further discloses An information processing method (Besecker: [0083][0085]). Claim 13 is therefore rejected for the reasons set forth above in claim 1 and in this paragraph.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Besecker in view of Zia, and further in view of previously cited Lee (US 2024/0221049 A1).
Regarding Claim 5, Besecker and Zia teach the limitations of claim 2.
Besecker further discloses wherein the processor is configured to: ask the user a condition related to the product and a location for use of the product ([0116] The “view in AR room” tool 702 may be activated when one or more products have been added to the user's list of AR-enabled images in order to access an AR-enabled view of the product 504 in a physical environment; [0117] product 504 is added to the virtual environment after selecting the “view in AR room” tool 702… the user may select a 2D image portraying product 504 and the application may determine visible properties of product 504 or features of the AR-enabled environment; [0125] the 2D image may be selected from an assortment supplied by an external resource database, or a trending list of room arrangements provided by a social media site); and
in accordance with the location for use which is related to the condition specified in response to the asking, exert control to display the information about the product ([0113] The “view in AR room” tool 702 may indicate one or more product images that are AR-enabled images available to place in an AR-enabled environment associated with the user's physical environment; [0125] the selected 2D image of the desired layout may be processed by image analyzer 110 to extract arrangement and/or décor styling information, for example, to guide the (e.g., automatic) selection and/or placement of 3D models within the 3D virtual environment(s)… see [0120] describing the user specifying the priority of placement);
But does not explicitly disclose determining a condition related to a difference in hue between the product and a location for use of the product; and in accordance with the condition related to the difference in hue from the location for use, exert control to display the information about the product.
Lee, on the other hand, teaches determining a condition related to a difference in hue between the product and a location for use of the product ([0058] there are special features peculiar for each interior decoration style (e.g., color) and detailed arrangement of furniture (e.g., position of the ceiling and the floor). Therefore, differences in colors, atmosphere, etc. can be detected based on image generation models according to each style, and features peculiar to a location (e.g., desk or bed) can emerge from image generation models according to each location… making it possible to generate the most precise interior decoration design image); and
in accordance with the condition related to the difference in hue from the location for use, exert control to display the information about the product ([0052] offering recommendation on interior decoration using GANs; [0058] differences in colors, atmosphere, etc. can be detected based on image generation models according to each style, and features peculiar to a location (e.g., desk or bed) can emerge from image generation models according to each location… making it possible to generate the most precise interior decoration design image; [0059] when the user inputs a desired image using StarGANs, the image can be converted into images in various styles. The conversion of the image into images in various styles can be realized by one model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Besecker and Zia, determining a condition related to a difference in hue between the product and a location for use of the product; and in accordance with the condition related to the difference in hue from the location for use, exert control to display the information about the product, as taught by Lee, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Besecker and Zia, to include the teachings of Lee, in order to satisfy the growing range of customer requirements for interior decoration and choose an appropriate design for the target space, as well as reduce time spent on conventional interior decoration methods (Lee, [0004]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Besecker in view of Zia, and further in view of previously cited Lim (US 2015/0213541 A1).
Regarding Claim 6, Besecker and Zia teach the limitations of claim 2.
Besecker further discloses wherein the information about the product used to arrange the article is information useful for goods-for-arrangement used to arrange the identified article ([0092] system 100 and/or mobile computing device 102 may be configured to render “assemblies,” which are objects mounted on other objects and/or arranged into some kind of layout… system 100 and/or mobile computing device 102 may be configured to allow for the use of rules and/or metadata defining fit, location, and/or compatibility, which may provide the ability to mount an object on another object; [0094] determine the compatibility/fit between such objects and/or arrange them accordingly. For example, if a user attempts to place a virtual end table and a virtual lamp in the same virtual location, system 100 and/or mobile computing device 102 may be configured to arrange the virtual end table on the floor of the virtual space, and/or arrange the virtual lamp on top of the virtual end table);
But does not explicitly disclose wherein the information is information useful for making, by the user themselves, goods-for-arrangement used to arrange the identified article.
Lim, on the other hand, teaches wherein the information is information useful for making, by the user themselves, goods-for-arrangement used to arrange the identified article ([0033] access or view the online platform 8 and order one or more of the items 10a, 10b, 10c and 10d listed on the online platform 8 through the network 5. The customers 9b and 9c may click any one of "View More" buttons 14a, 14b, 14c and 14d to view more information about the corresponding item 10a, 10b, 10c or 10d, including details of the corresponding flower product (containing, e.g., an image (or picture), size, and weight of each piece of the corresponding unassembled kit to be assembled by a customer), video instruction, and suggestions; [0034] after receiving the purchase order, the vendor 9a may submit a first request associated with an unassembled kit (or a do-it-yourself (DIY) kit) for the item 1a, wherein the unassembled kit 22 is associated with the product, which comes with an instruction manual 26).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Besecker and Zia, wherein the information is information useful for making, by the user themselves, goods-for-arrangement used to arrange the identified article, as taught by Lim, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Besecker and Zia, to include the teachings of Lim, in order to provide a customer with an easily assembled DIY kit for arranging a flower product via online shopping methods, such that the customer can follow DIY instructions (Lim, [0003-0006][0044]).
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Besecker in view of Zia, and further in view of previously cited Kunikyo (US 2022/0108383 A1).
Regarding Claim 10, Besecker and Zia teach the limitations of claim 8.
Besecker further discloses wherein the processor is configured to: provide, to the user, the arrangement-related information, on a basis of specifying either one or both of the identified article and a product used to arrange the article ([0116] The “view in AR room” tool 702 may be activated when one or more products have been added to the user's list of AR-enabled images (e.g., associating a product identifier); [0117] the user may select a 2D image portraying product 504 and the application may determine one or more visible properties of product 504; [0121] the user can add a product 504 to a virtual environment after selecting the “view in AR room” tool 702 and selecting a 3D image portraying a product… see [0092] system 100 and/or mobile computing device 102 may be configured to render “assemblies,” which are objects mounted on other objects and/or arranged into some kind of layout… system 100 and/or mobile computing device 102 may be configured to allow for the use of rules and/or metadata defining fit, location, and/or compatibility, which may provide the ability to mount an object on another object);
But does not explicitly disclose providing a result from a Web search as the arrangement-related information, the Web search being performed on a basis of a word for specifying either one or both of the identified article and a product used to arrange the article.
Kunikyo, on the other hand, teaches providing a result from a Web search as the arrangement-related information, the Web search being performed on a basis of a word for specifying either one or both of the identified article and a product used to arrange the article ([0080] Referring FIG. 2, a closet A1 is shown as a location whose image is to be captured by the terminal 20; [0082] The measurement data may indicate the size of an area in which the user of the terminal 20 wants to place a product… the measurement data may be acquired by an application for searching for a product; [0085] “furniture” is posted as a search word for a product that the user wants to purchase, in addition to the measurement data; [0086] a product database (DB) D71 contained in the content information D7 in the server 10 is searched for a list of products that match the search information, and the list of products is displayed on the display 28 of the terminal 20… see [0127] a shopping server 50 maintains the product database).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system, as taught by Besecker and Zia, providing a result from a Web search as the arrangement-related information, the Web search being performed on a basis of a word for specifying either one or both of the identified article and a product used to arrange the article, as taught by Kunikyo, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Besecker and Zia, to include the teachings of Kunikyo, in order to improve art product purchasing systems by reducing erroneous determination of matching products with ordered products (Kunikyo, [0003]).
Claim 11 recites a system comprising limitations substantially similar to those of claim 10. The claim is rejected under substantially similar grounds as claim 10.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY R DONAHUE whose telephone number is (571)272-5850. The examiner can normally be reached M-F 8a-5p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZACHARY RYAN DONAHUE/Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689