Java: methods not found when fitting ALSModel

qc6wkl3g · posted 2023-05-27 in Java

I'm new to Spark, and I want to read data from a database and recommend products to a specific user. I found the following code:

import java.util.Collections;

import org.apache.spark.ml.recommendation.ALS;
import org.apache.spark.ml.recommendation.ALSModel;
import org.apache.spark.sql.DataFrameReader;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class CollaborativeFiltering {
    public static void main(String[] args) {
        // Step 1: Set up Spark environment
        SparkSession spark = SparkSession.builder()
                .appName("CollaborativeFiltering")
                .master("local[*]")
                .getOrCreate();

        // Step 2: Configure database connection and load ratings data into DataFrame
        String url = "jdbc:mysql://localhost:3306/your_database"; // Replace with your database URL
        String table = "ratings"; // Replace with your table name
        String user = "your_username"; // Replace with your database username
        String password = "your_password"; // Replace with your database password

        DataFrameReader reader = spark.read().format("jdbc");
        Dataset<Row> ratingsDF = reader.option("url", url)
                .option("dbtable", table)
                .option("user", user)
                .option("password", password)
                .load();

        // Step 3: Prepare data for collaborative filtering
        Dataset<Row> preparedData = ratingsDF.withColumnRenamed("user_id", "userId")
                .withColumnRenamed("product_id", "itemId");

        // Step 4: Build collaborative filtering model
        ALS als = new ALS();

        // Set the required parameters
        als.setUserCol("userId");
        als.setItemCol("itemId");
        als.setRatingCol("rating");

        // Set additional optional parameters
        als.setRank(10); // Set the number of latent factors
        als.setMaxIter(10); // Set the maximum number of iterations
        als.setRegParam(0.01); // Set the regularization parameter

        ALSModel model = als.fit(preparedData);

        // Step 5: Generate recommendations for a specific user.
        // recommendForUserSubset expects a Dataset with a column named after
        // the configured userCol, so rename the single column to "userId".
        int userId = 123; // Replace with the desired user ID
        Dataset<Row> users = spark.createDataset(Collections.singletonList(userId), Encoders.INT())
                .toDF("userId");
        Dataset<Row> userRecommendations = model.recommendForUserSubset(users, 5); // Top 5 recommendations

        // Print the recommendations
        userRecommendations.show(false);

        // Stop the Spark session
        spark.stop();
    }
}

But the methods setMaxIter, setRegParam and fit are not found. Please help. PS: I'm using Spark version 3.3.0 and Scala version 2.13; I've tried other versions, but I always hit the same problem.

acruukt9


Changing the versions to the following solved the problem. Spark artifacts are published separately per Scala version, so the Scala version declared in your build must match the one the Spark jars were compiled against; mixing them leads to "method not found" errors like yours.

<scala.version>2.12</scala.version> 
<spark.version>3.2.0</spark.version>
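For context, a minimal sketch of how these properties would be wired into a Maven pom.xml (the mysql-connector-java dependency and its version are assumptions for the JDBC source in the question, not part of the original answer):

```xml
<properties>
    <scala.version>2.12</scala.version>
    <spark.version>3.2.0</spark.version>
</properties>

<dependencies>
    <!-- Spark SQL, cross-built for the chosen Scala version -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Spark MLlib provides ALS / ALSModel -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- JDBC driver for the MySQL source (version is an assumption) -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.33</version>
    </dependency>
</dependencies>
```

The key point is that the `_${scala.version}` suffix on every Spark artifact must agree with the Scala version your project compiles against.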
